Dataset columns: title (string, 1–827 characters), uuid (string, 36 characters), pmc_id (string, 5–8 characters), search_term (categorical, 18 values), text (string, 0–8.42M characters).
Do psychiatric patients experience more psychiatric symptoms during COVID-19 pandemic and lockdown? A case-control study with service and research implications for immunopsychiatry
c1a15f24-1760-43fb-b7dd-98ff47e787a2
7184991
Physiology[mh]
Introduction
The 2019 coronavirus disease (COVID-19) is highly infectious and potentially fatal. Its psychological impact on persons with mental disorders remains unknown. In the absence of a cure or vaccine against COVID-19, lockdown, isolation, quarantine and limiting community interaction are the main psychoneuroimmunity prevention strategies proposed to reduce pathogen exposure. A recent study found that quarantine was not associated with the prevalence of mental health problems in the general population. Nevertheless, the COVID-19 pandemic has had a sudden and massive impact on health care infrastructure, transportation, daily activity, freedom of movement, and distribution of medical resources globally. These sudden changes could significantly impact the mental health of psychiatric patients as well as reduce their access to psychiatric services. Furthermore, mental health providers could experience burnout as they might be deployed to look after COVID-19 patients. Rightfully, the focus of health services has primarily been placed on COVID-19. However, we must also be mindful and ensure that psychiatric services are not neglected in the present pandemic. As psychiatric inpatient wards were found to be a perfect breeding ground for the coronavirus, most stable psychiatric patients should receive treatment at home to reduce the risk of infection. Managing psychiatric patients during the COVID-19 pandemic poses a variety of challenges for psychiatrists. Ideally, an immunopsychiatry service should safeguard the physical and mental health of psychiatric patients by providing telepsychiatry consultation, home delivery of medications, psychological support, rapid testing for coronavirus and monitoring of inflammatory markers related to stress and depression during a large infection outbreak. Such an immunopsychiatry service addresses the biopsychosocial aspects of the COVID-19 pandemic. Due to the sudden outbreak and lack of experience with COVID-19, most mental health services were unprepared to provide the above services and unable to reach out to psychiatric patients during the lockdown. As a result, the needs of psychiatric patients are being neglected during the pandemic. Although there are a few studies to date on the effect of the COVID-19 pandemic on the mental health of the general population, COVID-19 patients, health professionals and workers who returned to work, there remains little research on the psychological impact and mental health of psychiatric patients living in the community during the COVID-19 pandemic. This study aimed to assess and compare the immediate stress and psychological impact experienced by people with and without psychiatric illnesses during the peak of the COVID-19 epidemic with strict lockdown measures. This information has the potential to uncover differences in mental health needs between people with and without psychiatric illnesses during the pandemic and to help develop a new immunopsychiatry service for future outbreaks of infectious disease. We hypothesised that there were no differences between the levels of depression, anxiety, stress, and psychological impact encountered by people with and without psychiatric illnesses during the peak of the COVID-19 epidemic with strict lockdown measures.

Methods
2.1 Participants
An online questionnaire was administered via SMS to psychiatric patients from the databases of the First People’s Hospital of Chongqing Liang Jiang New Area, China.
This study was conducted via electronic means because lockdown measures imposed by the local government prohibited face-to-face contact. The First People’s Hospital of Chongqing Liang Jiang New Area is a designated hospital for COVID-19, where 17 infected patients had been reported at the time this study was conducted. The psychiatric patients were recruited from 19 to 21 February 2020, and healthy control participants were recruited from 21 to 22 February 2020. A short recruitment period allowed us to measure the psychological impact during the peak of the COVID-19 epidemic, when strict lockdown measures for all people in the city were in place. The healthy control participants were recruited through convenience sampling. Written informed consent was obtained from all participants. The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration. All procedures involving human subjects/patients were approved by the Ethics Review Committee of The First People’s Hospital of Chongqing Liang Jiang New Area (IRB No. 2020-02-001).

2.2 Inclusion and exclusion criteria
The inclusion criteria differed for psychiatric patients and healthy controls. Psychiatric patients had to be aged 18 years or above and previously diagnosed by psychiatrists with F32 major depressive disorder (single episode), F33 major depressive disorder (recurrent episodes), F41 other anxiety disorders (including generalised anxiety disorder and panic disorder), or F41.8 mixed anxiety and depressive disorder, based on the criteria of the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10). Healthy controls were aged 18 years or above and had no history of psychiatric illness. Exclusion criteria were lack of a mobile phone number and Internet access, inability to complete an online survey, presence of chronic medical disorders (including neurological, cardiovascular, respiratory, endocrine and inflammatory disorders), and suspected or confirmed COVID-19.

2.3 Measures
The structured questionnaire covered several areas: (1) demographic data; (2) physical symptoms resembling COVID-19 infection and self-rated physical health status in the past 14 days; (3) the Impact of Event Scale-Revised (IES-R); (4) the Depression, Anxiety and Stress Scale (DASS-21); (5) the Insomnia Severity Index (ISI); and (6) other psychiatric symptoms. The psychological impact of the COVID-19 epidemic was measured using the IES-R, which measures post-traumatic stress disorder (PTSD) symptoms in survivors during the COVID-19 pandemic. The IES-R is a self-administered questionnaire that has been well validated in the Chinese population for determining the extent of psychological impact within one week of exposure to a public health crisis. The total IES-R score was divided into 0–17 (normal), 18–23 (PTSD-like symptoms) and 24 or above (meeting the PTSD threshold) (Lee, Kang, Cho, Kim, and Park, 2018). Mental health status was measured using the DASS-21, which is based on a tripartite model of psychopathology comprising a general distress construct with distinct characteristics. The DASS has been demonstrated to be a reliable and valid measure of mental health in the Chinese population and has previously been used in research related to SARS and COVID-19.
The sleep quality of respondents was measured using the Insomnia Severity Index (ISI). The total ISI score was divided into no clinically significant insomnia (0–7), subthreshold insomnia (8–14), moderately severe clinical insomnia (15–21) and severe clinical insomnia (22–28).

2.4 Statistical analysis
Descriptive statistics were used to summarize the variables: mean and standard deviation for continuous variables, and frequency and percentage for categorical variables. Inferential statistics, including the independent-samples t-test and Pearson’s chi-square test, were used to examine whether the outcome variables differed between the psychiatric patient and healthy control groups. Multiple linear regression with a backward selection method was used to examine the association between the outcome variables and group membership as well as the demographic variables. All analyses were conducted using IBM SPSS Statistics 22, and the level of significance was set at 5%.
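For illustration, the severity bands described above can be expressed as simple scoring helpers. The following is a minimal sketch in Python, not part of the published study; the function names are the editor's, and scores of 24 and above on the IES-R are treated as meeting the PTSD threshold, following the cut-off used later in the Results.

```python
# Illustrative only: banding of IES-R and ISI total scores using the
# cut-offs stated in the Measures section above.

def categorise_iesr(total: int) -> str:
    """Band an IES-R total score: 0-17 normal, 18-23 PTSD-like symptoms, >=24 PTSD threshold."""
    if total <= 17:
        return "normal"
    if total <= 23:
        return "PTSD-like symptoms"
    return "meets PTSD threshold"


def categorise_isi(total: int) -> str:
    """Band an ISI total score (0-28) into the four insomnia severity categories."""
    if total <= 7:
        return "no clinically significant insomnia"
    if total <= 14:
        return "subthreshold insomnia"
    if total <= 21:
        return "moderately severe clinical insomnia"
    return "severe clinical insomnia"


if __name__ == "__main__":
    print(categorise_iesr(19))  # PTSD-like symptoms
    print(categorise_isi(16))   # moderately severe clinical insomnia
```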
Results
3.1 Demographic characteristics of respondents with and without psychiatric illnesses
The demographic characteristics of the study respondents with and without psychiatric illnesses are shown in the corresponding table. Of the 666 psychiatric patients who were approached, 76 completed the survey, a response rate of 11.3%. Of the 130 healthy controls who were approached, 109 completed the survey, a response rate of 83.8%. The respondents therefore consisted of 76 psychiatric patients and 109 healthy controls, who were age and gender matched. The majority of respondents in both groups were women living in households of 3 to 5 family members, with a mean age of 32 years. A significantly higher proportion of healthy controls held an undergraduate degree (61.5%) compared with psychiatric patients (35.5%) (p < 0.001). Significantly more psychiatric patients reported physical symptoms resembling COVID-19 (30.3% vs 5.5%) and poor or worse physical health (9.2% vs 2.8%) compared with healthy controls (p < 0.001).
Among the psychiatric patients, the majority had F41.8 mixed anxiety and depressive disorder (59%), followed by F41 other anxiety disorders (25%) and F32/F33 major depressive disorder (16%).

3.2 IES-R and DASS scores of respondents with and without psychiatric illnesses
The IES-R, DASS-21 and ISI scores of the study respondents are shown in the corresponding table. The mean IES-R score of psychiatric patients (17.7 ± 14.2) was significantly higher than that of healthy controls (11.3 ± 10.1) (p < 0.001). Thirty-three (43.4%) psychiatric patients and 30 (27.5%) healthy controls received a score of 18 or higher, consistent with clinically significant PTSD-like symptoms. Significantly more psychiatric patients reported PTSD-like symptoms compared with healthy controls (p = 0.025). Twenty-four (31.6%) psychiatric patients and 15 (13.8%) healthy controls received a score of 24 or higher, indicating fulfilment of the diagnostic criteria for PTSD. Significantly more psychiatric patients fulfilled the diagnostic criteria for PTSD compared with healthy controls (p = 0.03). The mean DASS-21 anxiety score of psychiatric patients (6.6 ± 9.0) was significantly higher than that of healthy controls (1.5 ± 2.7) (p < 0.001). Eighteen (23.6%) psychiatric patients and 3 (2.7%) healthy controls received a score of 10 or higher on the anxiety subscale, indicating moderate to severe anxiety symptoms. Significantly more psychiatric patients reported anxiety symptoms compared with healthy controls (p < 0.001). The mean DASS-21 depression score of psychiatric patients (8.3 ± 10.3) was significantly higher than that of healthy controls (2.2 ± 3.5) (p < 0.001). Seventeen (22.4%) psychiatric patients and 1 (0.9%) healthy control received a score of 14 or higher on the depression subscale, indicating moderate to severe depressive symptoms. Significantly more psychiatric patients reported depressive symptoms compared with healthy controls (p < 0.001). The mean DASS-21 stress score of psychiatric patients (8.0 ± 9.8) was significantly higher than that of healthy controls (2.7 ± 4.2) (p < 0.001). Thirteen (17%) psychiatric patients and 1 (0.9%) healthy control received a score of 19 or higher on the stress subscale, indicating moderate to severe stress symptoms. Significantly more psychiatric patients reported stress symptoms compared with healthy controls (p < 0.001). The mean ISI score of psychiatric patients (10.1 ± 7.16) was significantly higher than that of healthy controls (4.63 ± 4.04) (p < 0.001). Twenty (27.6%) psychiatric patients and 1 (0.9%) healthy control received a score of 15 or higher, indicating moderately severe to severe clinical insomnia. Significantly more psychiatric patients reported moderately severe to severe clinical insomnia compared with healthy controls (p < 0.001).

3.3 Other psychiatric symptoms of respondents during the COVID-19 epidemic
Other psychiatric symptoms reported by respondents during the COVID-19 epidemic are shown in the corresponding table. Twenty-two (29%) psychiatric patients and 5 (4.6%) healthy controls reported moderate to severe worries about their physical health. Significantly more psychiatric patients reported moderate to severe worries about their physical health compared with healthy controls (p < 0.001). Sixteen (21%) psychiatric patients and 1 (0.9%) healthy control reported moderate to severe anger and impulsivity.
Significantly more psychiatric patients reported moderate to severe anger and impulsivity compared with healthy controls (p < 0.001). Nine (11.8%) psychiatric patients and 1 (0.9%) healthy control reported moderate to severe suicidal ideation. There were no significant differences between psychiatric patients and healthy controls in the rates of reported discrimination, auditory hallucinations, paranoid ideas, alcohol use, and intention to harm others (p > 0.05).

3.4 Factors associated with the psychological impact of respondents during the COVID-19 epidemic
The corresponding table shows the results of linear regression analyses relating psychological impact to self-reported health status and history of psychiatric illness in all respondents, with adjustment for demographic factors. Reporting physical symptoms in the past 14 days was significantly associated with higher mean DASS anxiety subscale scores (B = 3.956, 95% CI: 1.438–6.475, p = 0.002) and DASS stress subscale scores (B = 3.352, 95% CI: 0.368–6.335, p = 0.028) compared with not reporting any recent physical symptoms. Respondents who reported no change or poor or worse physical health status were significantly more likely to endorse higher mean IES-R scores (B = 6.245, 95% CI: 2.677–9.813, p = 0.001), DASS depression subscale scores (B = 2.931, 95% CI: 1.105–4.756, p = 0.002), DASS anxiety subscale scores (B = 4.202, 95% CI: 2.093–6.312, p < 0.001), DASS stress subscale scores (B = 3.766, 95% CI: 1.604–5.929, p = 0.001), and ISI scores (B = 3.545, 95% CI: 1.923–5.168, p < 0.001) than respondents who reported better health. Respondents with psychiatric illnesses were significantly more likely to endorse higher mean IES-R scores (B = 4.450, 95% CI: 0.852–8.048, p = 0.016), DASS depression subscale scores (B = 3.223, 95% CI: 1.385–5.061, p = 0.001), DASS anxiety subscale scores (B = 4.871, 95% CI: 2.74–6.998, p < 0.001), DASS stress subscale scores (B = 3.311, 95% CI: 1.133–5.488, p = 0.003), and ISI scores (B = 4.386, 95% CI: 2.749–6.022, p < 0.001) than respondents without psychiatric illnesses. Other demographic factors, including age, gender, education level, and household size, were not associated with differences in mean IES-R, DASS depression, anxiety, and stress subscale, or ISI scores (p > 0.05).
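As an illustration of the analysis strategy described in the Statistical analysis section, the sketch below shows how the group comparisons and the backward-selection regression could be reproduced in Python. The published analysis was carried out in IBM SPSS Statistics 22; the data frame `df` and its column names are hypothetical placeholders, and the simple elimination loop is one common way to implement backward selection, not necessarily the SPSS procedure.

```python
# Illustrative re-creation of the reported analysis strategy; not the authors' SPSS workflow.
import pandas as pd
import statsmodels.api as sm
from scipy import stats


def t_test_between_groups(df: pd.DataFrame, outcome: str) -> float:
    """Independent-samples t-test of an outcome between patients and controls."""
    patients = df.loc[df["group"] == "patient", outcome]
    controls = df.loc[df["group"] == "control", outcome]
    return stats.ttest_ind(patients, controls).pvalue


def chi_square_2x2(a_yes: int, a_no: int, b_yes: int, b_no: int) -> float:
    """Pearson chi-square test for a 2x2 table of counts."""
    _, p, _, _ = stats.chi2_contingency([[a_yes, a_no], [b_yes, b_no]])
    return p


def backward_selection(df: pd.DataFrame, outcome: str, predictors: list[str],
                       alpha: float = 0.05):
    """Ordinary least squares with simple backward elimination: refit after
    dropping the predictor whose term has the largest p-value above alpha."""
    remaining = list(predictors)
    while True:
        X = sm.add_constant(pd.get_dummies(df[remaining], drop_first=True, dtype=float))
        model = sm.OLS(df[outcome], X).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha or len(remaining) == 1:
            return model
        # drop the source variable whose dummy term was least significant
        remaining = [v for v in remaining if not worst.startswith(v)]


# Hypothetical usage:
# p = chi_square_2x2(33, 76 - 33, 30, 109 - 30)   # IES-R >= 18 counts by group, as reported above
# model = backward_selection(df, "ies_r_total",
#                            ["group", "age", "gender", "education", "household_size"])
# print(model.summary())
```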
Discussion
The main results of the present study indicate that during the peak of the COVID-19 epidemic with strict lockdown measures, psychiatric patients scored significantly higher on the total IES-R, the DASS-21 anxiety, depression and stress subscales, and the total ISI. More than one-quarter of psychiatric patients reported PTSD-like symptoms and moderate to severe insomnia. Psychiatric patients were significantly more likely to report worries about their physical health, anger, impulsivity, and suicidal ideation. Respondents who reported no change or poor or worse physical health status, and those with psychiatric illnesses, were significantly more likely to endorse higher mean IES-R, DASS depression, anxiety and stress subscale, and ISI scores. Our findings rejected the original null hypothesis that there were no differences between the levels of depression, anxiety and stress and the psychological impact encountered by people with and without psychiatric illnesses during the peak of the COVID-19 epidemic with strict lockdown measures.
To the best of our knowledge, this is the first study assessing the psychological impact on psychiatric patients and healthy controls during the peak of the COVID-19 epidemic when strict lockdown measures were in place for an entire city. Our findings identify potential targets of assessment and care for psychiatric patients as part of a new immunopsychiatry service during a pandemic. Our results can serve as a reference for mental health professionals and authorities in future outbreaks of infectious disease. COVID-19 is highly contagious and has caused large-scale lockdowns worldwide. The epidemic has resulted in relatively greater psychological distress in psychiatric patients. From the viewpoint of an immunopsychiatry service, psychiatric patients were more likely to report moderate to severe worries about their physical health, possibly because of concern that they might have unknowingly contracted the virus and perhaps less effective coping strategies. An immunopsychiatry service should therefore offer point-of-care testing for COVID-19, as negative findings can offer reassurance to psychiatric patients. Contributing factors to worsening mental health were likely delays in the delivery of psychotropic medications, lack of access to primary care or outpatient clinics, increased financial difficulty, personal concern about contracting COVID-19, long periods of staying at home, and poorer living conditions due to shortages of supplies in the weeks following the outbreak. These changes in circumstances might lead to feelings of hopelessness and increased suicidal ideation among psychiatric patients. People with psychiatric illnesses were significantly more likely to endorse higher PTSD, depression, anxiety, stress, and insomnia scores. Psychiatric patients might have encountered a reduction in mental health services during the COVID-19 epidemic. Multiple factors contributed to this reduction in service. First, the immediate mental health care needs of psychiatric patients became a lower priority when the number of COVID-19 cases rose sharply in the city. Second, psychiatric patients were encouraged not to visit the hospital as health services were devoted to managing terminally ill patients and suspected or confirmed cases of COVID-19. Third, the lockdown measures made it difficult for patients to see psychiatrists and other mental health care providers because of insufficient healthcare resources, along with fear of contracting COVID-19 in hospitals that managed infected patients. Our findings emphasise the need for a new immunopsychiatry service during the COVID-19 pandemic to disseminate management plans to psychiatric patients via telepsychiatry under lockdown measures, including people who have not contracted COVID-19. After the COVID-19 epidemic, mental health preparedness and anticipation of future outbreaks should lead to increased awareness of the needs of psychiatric patients and to contingency plans being put in place. Telepsychiatry emergency services or hotlines should be made available to patients with intense suicidal ideation. Improved access to telepsychiatry services, home delivery of psychotropic medications, online psychiatric first-aid resources, and infectious disease outbreak preparedness play a pivotal role in minimising the severity of psychiatric symptoms experienced by psychiatric patients.
As depression and stress are associated with increases in pro-inflammatory cytokines and inflammatory markers, including interleukin-1 beta (IL-1β), interleukin-6 (IL-6), tumour necrosis factor-α (TNF-α) and C-reactive protein (CRP), future immunopsychiatry services and research should monitor the relationship between levels of pro-inflammatory cytokines and depression in psychiatric patients during the pandemic. Similarly, PTSD is associated with an enhanced interleukin-6 response to mental stress. If pro-inflammatory cytokines are found to be increased in psychiatric patients during lockdown, further research will be required to evaluate pharmacological and non-pharmacological interventions (e.g. physical activity) to reduce them. Self-reported poor or worse physical health status was significantly associated with higher PTSD, depression, anxiety, stress, and insomnia scores. During the COVID-19 pandemic, the general public was found to spend more time at home. Telepsychiatry and smartphone-based behaviour therapy should focus on relaxation exercises to counteract anxiety, PTSD-like symptoms, anger, and irritability. Sleep hygiene can improve sleep quality and circadian rhythm as part of psychoneuroimmunity preventive strategies. Activity scheduling (e.g., home-based exercise) can improve physical health status in the home environment. Further research is required to evaluate the effectiveness of these psychoneuroimmunity preventive strategies to enhance resilience. Based on our findings, psychiatric patients expressed significantly higher levels of worry about their physical health. Telepsychiatry and smartphone-based cognitive therapy can challenge cognitive biases whereby psychiatric patients tend to overestimate the risk of contracting COVID-19 or underestimate their physical health status. In this study, there are several negative findings between people with and without psychiatric illnesses that require further interpretation. Previous studies have shown widespread discrimination against people with psychiatric illnesses in China. In this study, respondents with psychiatric illnesses did not report experiencing additional discrimination during the COVID-19 epidemic. One possible explanation is that society held more negative views towards COVID-19 than towards psychiatric illnesses during an outbreak of a life-threatening infection. Respondents with psychiatric illnesses also did not show an increase in alcohol intake compared with healthy controls. This observation differs from previous work, which reported increases in alcohol intoxication and abuse after natural disasters (e.g., earthquakes). As the government implemented outing restrictions for all citizens during the COVID-19 epidemic, people with and without psychiatric illnesses did not have frequent opportunities to purchase alcohol from local supermarkets. Also, entertainment venues, bars, and restaurants were ordered to cease operation, and these measures further reduced the alcohol intake of both psychiatric patients and healthy controls. Levels of PTSD symptoms, depression, anxiety, stress, and insomnia were not related to educational level, age or gender, indicating that all sectors of the community were adversely affected. There are several limitations in the present study.
First, the generalisability of the psychiatric sample is limited because the patients suffered from non-psychotic psychiatric disorders, a consequence of restricted sampling during the COVID-19 epidemic with strict lockdown measures. Their capacity to complete an online questionnaire also indicates that these psychiatric patients were less severely ill. Second, because of the lockdown we were not able to obtain biological samples, such as measurements of pro-inflammatory cytokine levels. Third, sampling was voluntary and conducted online while strict lockdown measures were in place; psychiatric patients without mobile phone or Internet access were excluded, which contributed to the low response rate. Fourth, this was a cross-sectional study, and we could not demonstrate cause-and-effect relationships between self-perceived health status, underlying psychiatric conditions, and psychological impact. Additionally, this study was performed in only one hospital and might not reflect trends seen throughout China. Despite these limitations, this is the first study to examine the psychological impact on people with and without psychiatric illnesses in a city severely affected by the COVID-19 epidemic with strict lockdown measures.

Conclusion
To our knowledge, this is the first cross-sectional study to compare the prevalence of psychiatric symptoms between people with and without psychiatric illnesses during the COVID-19 pandemic. Our findings will serve as a reference for mental health professionals and institutions in other countries as the COVID-19 pandemic continues. The results of this study suggest that psychiatric patients were at higher risk of displaying symptoms of PTSD, depression, anxiety, stress and insomnia, as well as worries about physical health, anger and irritability, and suicidal ideation, compared with healthy controls. From an immunopsychiatry service viewpoint, there should be greater awareness of psychiatric patients as targets for care, with continuous psychiatric intervention during pandemics of life-threatening infectious diseases. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Developing clinical decision tools to implement chronic disease prevention and screening in primary care: the BETTER 2 program (building on existing tools to improve chronic disease prevention and screening in primary care)
3b0bdd14-a07d-4f9c-8575-407457c59a9d
4523915
Preventive Medicine[mh]
Context
The prevalence of chronic disease is steadily increasing, and primary care is an ideal setting for chronic disease prevention and screening (CDPS) activities. Regrettably, evidence-based tools and strategies for CDPS are inconsistently applied in the primary care setting, in part due to the numerous and sometimes conflicting recommendations and guidelines. Since 45% of people have one or more chronic diseases, primary care providers need effective strategies that address multiple conditions. However, guidelines are focused on specific conditions or risk factors, which makes it difficult for clinicians to address patients’ unique risk profiles. Thus, a comprehensive evidence-based approach to CDPS has been ‘lost in translation’, and there is a need to engage end-users, including clinicians, researchers and policymakers, in a collaborative process to address this knowledge-to-action gap. Furthermore, with the competing demands on primary care providers there is little time to address CDPS; hence, a new approach that bridges the evidence-to-practice gap in primary care CDPS is needed. The Building on Existing Tools to Improve Chronic Disease Prevention and Screening in Family Practice (BETTER) trial was a pragmatic two-way factorial cluster randomized controlled trial conducted in urban primary care team practices in Alberta and Ontario, Canada. Patients aged 40–65 were invited to participate in the trial and stratified into two groups: 1) general medical patients and 2) patients with moderate mental illness. The BETTER tool kit and training provided the ‘prevention practitioners’ (PPs) in the two urban settings with the necessary tools and resources to evaluate patients for multiple risks. The tools were aimed at preventing multiple chronic conditions through a process of shared decision-making, which provided the patient with an individualized ‘prevention prescription’ that included actionable CDPS goals (see Fig.). The PPs in the BETTER trial were clinicians (licensed practical nurse, nurse, dietician, nurse practitioner) who worked in the multidisciplinary primary care clinics to develop a comprehensive approach to evidence-based CDPS within the practice setting. To bridge the gap between knowledge and practice, a BETTER trial clinical working group (CWG) was formed with end-users, including PPs, clinicians and researchers, with support from the Centre for Effective Practice. The CWG identified and harmonized high-quality clinical practice guidelines and tools for primary CDPS in adults 40–65 years of age, creating the BETTER tool kit that was implemented in the BETTER trial. This extensive review also defined the scope of the BETTER trial; the chronic diseases with the best evidence for primary prevention and screening were identified and incorporated into the comprehensive approach to CDPS. The target conditions included in the BETTER trial were cancer (breast, cervical, and colorectal), cardiovascular disease, diabetes and their associated lifestyle factors (smoking, alcohol consumption, diet/nutrition and physical activity). The BETTER trial demonstrated the effectiveness of a shared decision-making approach to CDPS that improved the implementation of clinically important CDPS activities through the new skilled role of a ‘prevention practitioner’ in both patient strata.
Once the BETTER trial demonstrated that a PP could improve the implementation of clinically important CDPS actions in a cost-effective manner, further funding was obtained to broaden the reach to other jurisdictions, including urban, rural and remote communities in Canada, and to deepen the impact of the intervention through the Building on Existing Tools to Improve Chronic Disease Prevention and Screening in Primary Care (BETTER 2) program. To achieve this goal and bridge the evidence-to-practice gap, revisions to the BETTER trial tool kit were required in order to update the evidence and adapt the tools into a format that could be used in diverse primary care settings, including rural and remote settings and with aboriginal populations. This process was, in part, informed by feedback received from the participating clinicians and patients in the BETTER trial, which indicated that some items could be modified or removed, while others, such as the family history, could be better formatted to improve data capture. We describe here the process of integrated knowledge translation that involved engaging end-users from the various practice settings (including clinicians and policymakers) with researchers as equal partners in this knowledge synthesis and the development of the resulting BETTER 2 tools.

Purpose
The purpose of this paper is to describe 1) the integrated process used to adapt and refine the BETTER trial tools for chronic disease prevention and screening (CDPS) by the BETTER 2 program and 2) the resultant tools that were then implemented in various urban, rural, remote and aboriginal primary care settings by the BETTER 2 program.
The BETTER trial CWG identified and reviewed high-quality clinical practice guidelines and harmonized them to standardize the recommendations for implementation into the BETTER trial.
The CWG considered strong evidence that was linked to a target or health outcome and created the knowledge products for the PP intervention that were used in the trial. The knowledge products were developed from November 2009 to March 2010 through a structured approach to evidence integration involving a knowledge-to-action cycle (Fig.). This involved engaging the researchers with end-users and policymakers in a process that included knowledge synthesis and harmonization through a structured evidence review, and then testing and applying the tools in the various practice settings through an iterative plan-do-study-act (PDSA) process. Findings from the local PDSA activities were then integrated into the tools. The BETTER 2 CWG was convened in November 2012; this group included end-users, clinicians (family physicians, registered nurses, nurse practitioners), policymakers from the new jurisdictions (Northwest Territories, Newfoundland and Labrador) and researchers tasked with reviewing and updating the high-quality recommendations for primary prevention of chronic conditions in patients 40–65 years of age. The chronic conditions included cardiovascular disease, diabetes and breast, colorectal, lung and cervical cancer, as well as the associated lifestyle risk factors (e.g. tobacco use, alcohol overuse, poor diet and physical inactivity). A targeted search using the process described in our previous publication was conducted to identify new resources meeting any of the following criteria: publication after 2009; addressing a gap or special population not considered in the original BETTER trial search; interventions strongly recommended for application in practice; recommendations for patients at higher risk due to family history; or new resources identified through scoping reviews of provincial and territorial recommendations. The CWG was divided into teams focusing on the following topics: breast cancer, cervical cancer, colorectal cancer, skin cancer, lung cancer, cardiovascular disease, diabetes, alcohol, mental health, lifestyle (tobacco, alcohol, nutrition and physical activity), obesity (waist circumference, BMI) and family history. Scoping reviews of provincial and territorial recommendations were also conducted to assist with further tailoring of the tools to comply with the approaches to CDPS in the participating Canadian provincial and territorial jurisdictions (Alberta, Ontario, Northwest Territories, Newfoundland and Labrador). For example, a decision was made to reduce the consumption thresholds recommended in Canada’s low-risk alcohol drinking guidelines, as the Northwest Territories was concerned about the high rates of colorectal cancer in its jurisdiction, concomitant heavy drinking, and the potential for increased cancer risk in those exceeding the alcohol levels recommended by the Canadian Cancer Society. The CWG concluded that the Canadian guidelines were more focused on the risk of developing an alcohol use disorder and therefore did not adequately inform individuals about lower alcohol consumption levels to reduce the risk of chronic diseases such as cancer. The tools were adapted to focus on informing Canadians about safer levels of alcohol consumption to prevent chronic disease, an approach that has since been recommended by the Canadian Cancer Society.
The members of the BETTER 2 CWG topic teams met, independently reviewed and critiqued the newly identified information, and presented their assessments for review by the entire CWG in November 2012, December 2012 and January 2013. To address gaps and build on the previous work, a greater emphasis was placed on tools that would facilitate family history assessment, address local disease and risk factor prevalence, and help harness local resources. Following this, the BETTER tools were updated, reviewed, edited and tested by various members of the BETTER 2 CWG to determine whether they were useful and appropriate in the various clinical settings. The review and BETTER 2 tool kit refinement was completed in January 2013, after which training sessions were held with CWG members and PPs to implement the updated tools in the various practice settings. As a result of the comprehensive work of the BETTER 2 CWG, the tools were updated and adapted to address CDPS in the various urban, rural, remote and aboriginal contexts. The following tools were refined for inclusion in the BETTER 2 tool kit: a patient health survey (Additional files), a CDPS care map (Additional file), a prevention visit form (Additional file), the bubble diagrams (Additional file) and the prevention prescription with goals (Additional file). The tool kit can be accessed on the BETTER website. This tool kit was the foundation for the comprehensive approach to CDPS implemented in the participating jurisdictions and was further customized for each practice setting through the identification of local, regional and national resources that could be harnessed to support patients’ CDPS care plans and lifestyle change goals.

Using the BETTER tools
The patient survey (Additional files) and prevention visit form (Additional file) capture the patient information and characteristics needed to make CDPS recommendations. The CDPS care map (Additional file), informed by the aforementioned data collection instruments, is a clinical decision aid that helps the clinician determine which CDPS recommendations the patient is eligible to receive when certain criteria are met, including when to refer the patient back to their primary care provider. The bubble diagrams (Additional file) are also instructive to both the clinician and the patient as to the CDPS activities a patient is eligible to receive and can be used to facilitate agenda setting with the patient. The prevention prescription (Additional file) is a document intended for patients to take with them when they leave the visit, to inform them of their prevention and screening status and to guide them on when, where and how they will go about addressing outstanding CDPS recommendations. Through a shared decision-making process between the clinician and patient, patients also set specific, measurable, attainable, realistic, timely (SMART) goals for their health (Additional file), providing the patient and clinician with a personalized plan geared toward enabling patients to achieve their CDPS goals.

Patient survey
The BETTER survey
The BETTER patient health survey (Additional file) provides primary care providers with a tool that captures the comprehensive patient information required to facilitate CDPS and monitor progress, including the important behavioural, environmental and familial risk factors.
This tool comprises validated instruments (Additional file) that gather detailed information relevant to CDPS, including chronic conditions, previous cancer screening activities, lifestyle information and risks, perception of general health and depression, family history of certain medical conditions, food security, and demographic information (age, gender, ethnicity, education, marital status, income). Much of this information is not routinely collected or available in the medical record; yet this information is required for a clinician to determine the CDPS actions an individual patient should focus on. The BETTER trial survey was refined for use in BETTER 2 based partly on feedback indicating that the tool could be streamlined and reformatted to improve data capture and usability. The original patient health survey was lengthy, consisting of 88 items, and included an assessment of physical activity using activity recall and a dietary assessment derived from MEDFICTS, a dietary instrument focused primarily on fat intake. After review by the BETTER 2 CWG, it was determined that employing other validated diet and exercise tools developed for use in primary care could improve the ability to identify patients who would benefit from a brief intervention and to track changes over time. Improvements to the original survey were made to better capture physical activity, including use of the General Practice Physical Activity Questionnaire (GPPAQ) to determine the patient’s level of activity, in addition to the patient’s self-reported number of minutes of exercise per week, to determine whether patients are achieving a CDPS target of ≥150 min per week of moderate exercise. The GPPAQ is a reliable and validated tool recommended for the assessment of physical activity in general practice and is supported by the United Kingdom National Institute for Health and Care Excellence (NICE). This tool assesses the respondent’s level of physical activity both in and outside of work and can be used by clinicians to determine when an intervention to increase physical activity would be beneficial, as well as to track a patient’s progress over time. A validated tool for dietary assessment and intervention in the clinical setting, Starting the Conversation, was added to provide the clinician with insight into patients’ eating behaviours and information on how patients could improve their diet (e.g. increase fruit and vegetables, decrease sweetened beverages, decrease unhealthy snacks). Alcohol consumption is now captured quantitatively to determine whether patients are drinking within healthy alcohol consumption guidelines according to the National Institute on Alcohol Abuse and Alcoholism’s overview of alcohol consumption for low-risk drinking. Alcohol use disorders are also screened for using the validated abbreviated form of the Alcohol Use Disorders Identification Test, the AUDIT-C. The updated tools provide clinicians with an approach that educates patients about healthy alcohol consumption as opposed to only screening for abuse. The health survey was reduced from 88 to 69 items to more efficiently capture information on the important modifiable lifestyle risk factors (smoking, alcohol consumption, diet/nutrition and physical activity) and includes assessments of the patient’s readiness to change, gathering the information required to address these risk factors. The health survey can take up to 30 min to complete.
It may be completed before the patient’s prevention visit either independently by the patient or administered by a health care professional when deemed appropriate (e.g. literacy, language). Prevention visit form The prevention visit form (Additional file ) is a clinical tool that captures and structures the information obtained from the patient’s survey and medical chart required to identify which prevention activities each patient is (or is not) eligible to receive. Typically, the clinician will partially complete this form before the patient visit to identify the CDPS activities eligible for discussion. Further information is collected at the time of the prevention visit including a limited physical assessment of the patient to obtain weight, height, waist circumference and blood pressure. Before the patient visit, the clinician can enter the patient’s individual CDPS information on a blank version of the bubble diagram (Additional file ) and the first page of the prevention prescription (Additional file ). CDPS care maps The CDPS care map (Additional file ) provides primary care providers with an algorithm of the summarized CDPS recommendations for primary prevention in 40–65 year olds for patients with and without diabetes. A health care provider can use the care map as a decision-making tool during the prevention visit to determine what actions to take when certain conditions are met. This includes consideration as to what CDPS actions a patient is eligible or not eligible to receive, and when to refer a patient back to their primary care provider. For some actions, particularly cardiovascular related CDPS, recommendations depend upon whether a patient has diabetes or not. Use of the CDPS care map is facilitated by information gathered from the patient survey, the prevention visit form and during the prevention visit. Other tools such as the Framingham risk stratification and/or a family history risk assessment tool can also be used to provide further information about the patient’s risks of diabetes, cardiovascular disease or cancer. Bubble diagram The bubble diagram (Additional file ) provides a brief overview of the blended evidence-based CDPS activities for primary prevention in 40–65 year old male and female patients. Regular screening intervals and healthy targets summarized in this tool are meant as a companion piece to the CDPS care map, which depicts the appropriate care path for patients depending on their level of risk. Specific patient details can be entered on a blank version of the bubble diagram and then used as a teaching tool when meeting with patients. The bubble diagrams can facilitate a motivational interviewing approach through agenda setting . For example, after educating the patient about CDPS and while showing the patient their individualized bubble diagram, the clinician can ask the patient what they want to do to improve their health and begin the work that is finalized in the prevention prescription (described below). Intrinsic in the patient-centred approach is the ability of the patient to opt out of discussing any area that they do not wish to address. The bubble diagram allows the negotiation of a shared agenda for the prevention visit through a visual emphasis on the bubbles the patient wants to address. The bubble diagram with the evidence overview can also be used as a visual cue to remind primary care providers about the CDPS activities to consider when seeing patients in this age group. 
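To make the eligibility logic that these paper tools support more concrete, a small sketch is given below. It is purely illustrative and is not part of the BETTER 2 tool kit: the field names, helper functions and automated branching are hypothetical, and only the ≥150 min/week moderate exercise target, the diabetes-dependent cardiovascular recommendations and the Framingham risk assessment are taken from the descriptions above.

```python
# Hypothetical sketch of care-map-style eligibility logic; not the actual BETTER 2 care map.

def exercise_target_met(minutes_moderate_exercise_per_week):
    """CDPS target described above: at least 150 min per week of moderate exercise."""
    return minutes_moderate_exercise_per_week >= 150


def prevention_actions(patient):
    """Return discussion topics a clinician might flag for a 40-65 year old patient."""
    actions = []
    if not exercise_target_met(patient.get("exercise_minutes_per_week", 0)):
        actions.append("brief physical activity intervention (GPPAQ follow-up)")
    if patient.get("smoker", False):
        actions.append("smoking cessation counselling")
    # Cardiovascular-related recommendations branch on diabetes status, as in the care map.
    if patient.get("has_diabetes", False):
        actions.append("diabetes-specific cardiovascular risk review")
    else:
        actions.append("cardiovascular risk assessment (e.g. Framingham)")
    return actions


print(prevention_actions({"exercise_minutes_per_week": 90, "smoker": True, "has_diabetes": False}))
```

In practice such an agenda would be negotiated with the patient via the bubble diagram and finalized on the prevention prescription, rather than generated automatically.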
Prevention prescription with goals
The prevention prescription (Additional file ) includes a summary of the patient’s CDPS status, target check-in intervals, referrals or actions to be completed, and any tools provided or linkages made to clinic or community resources to aid the patient in their CDPS efforts. The information on the prevention prescription that does not require shared decision-making can be partially completed before the visit and then finalized with input from the patient at the time of the visit. The goal sheet facilitates shared decision-making through the development of SMART goals including an assessment of confidence addressing action planning and self-efficacy in patient self-management . The prevention prescription with goals can be provided to the patient as a summary of their visit and serve as the patient’s personalized CDPS plan.
The need for a BETTER approach to CDPS
Busy clinicians lack adequate tools and resources to address CDPS in the primary care setting since many guidelines focus on specific conditions and lack precise recommendations that are clinically applicable . The BETTER tools bridge the knowledge to action gap through a blended approach of actionable items at every step in the process of CDPS from collecting the necessary patient information to care maps for primary care providers and teams . Through engaging the end-users in the process of developing CDPS tools and resources and applying the tools into the clinical setting, the BETTER trial was able to effectively implement CDPS in the family practice setting through a new skilled role of ‘prevention practitioner’ (PP) . These tools and resources further facilitate knowledge uptake by patients through agenda setting, shared decision-making and self-management. The updated tools and resources described in this paper were refined in order to further facilitate patient assessments, education and shared decision-making aimed to identify and achieve the patient’s personalized CDPS goals as well as capture the information required to evaluate CDPS outcomes for the BETTER 2 program implementation in urban, rural, remote and aboriginal settings. Modifiable lifestyle factors such as smoking, unhealthy eating, physical inactivity and unhealthy alcohol consumption have a huge impact on chronic disease and there is a pressing need to address multiple behavioural risk factors in primary care . Although addressing modifiable lifestyle risk factors can significantly impact mortality and morbidity, few individuals receive lifestyle counselling, even after a significant illness such as a cardiovascular event . The updated BETTER 2 tools provide clinicians with resources that evaluate the patient’s multiple lifestyle risks including their readiness to change and can be used to track changes over time. In addition, these tools facilitate shared decision-making with patients through agenda setting and identifying specific goals that encourage self-management . The tools are tailored to be adaptable and can be used in a number of ways as depicted in Fig. . In this way, multidisciplinary teams or family physicians can decide how best to apply the tools in their clinical settings. The updated BETTER approach may be used at the policy and practice level to target at-risk populations and invite patients to receive an effective individualized CDPS intervention based on high-level evidence supported by the BETTER 2 tool kit. Moreover, the tools can be harmonized and integrated with existing public health initiatives in various settings.
For example, an initiative involving population or practice level CDPS facilitation may consider the BETTER approach or integrating some of the tools to translate population level CDPS activities to individual at-risk patients. Policymakers may provide primary care providers with the BETTER tools to better implement CDPS into practice. Decision makers and primary care providers may adapt the tools to facilitate policy and practice integration of CDPS including consistent messages at all levels. This approach to knowledge integration is not without its limitations. The tools and resources developed focus on primary prevention of CDPS in patients aged 40–65; hence other high-level interventions such as immunizations, secondary prevention and chronic disease management are not included. The BETTER tools and resources were developed with knowledge integration considered at every step to bridge the research to practice gap through an implementation plan that engaged the end-users and applied the developed resources into the practice setting (Fig. ). Consequently, the final tools may not reflect the ‘best’ evidence but the best evidence that could be applied into the settings engaged. Also, over time, the various guidelines change, requiring constant updating and revisions to the tools. Furthermore, the process of knowledge integration is time-consuming and requires organization, expertise and resources to conduct. The time and resources required to integrate knowledge into practice settings are not readily available in a health care system that is designed to focus on acute medicine and disease management. In addition to developing clinical practice guidelines, resources should also be used to develop and refine tools and processes, such as the BETTER 2 tools, that allow guidelines to be more easily implemented into clinical practice. The BETTER 2 tools have been implemented and tested in the various practice settings and the outcomes will be presented in a future publication. The BETTER 2 tools can be downloaded for use from the BETTER website . Presently, the tools have been paper-based, which may limit the ability to implement the PP model into primary care in Canada due to the increased use of electronic medical records in primary care. Electronic versions of the patient survey and prevention prescription are currently being developed and will be tested in primary care settings in Alberta, Newfoundland and Labrador. Primary care teams should consider implementing the PP role to better address CDPS in conjunction with the primary care provider and thereby share the management of chronic disease prevention and screening. In Canada, prevention and early detection could reduce the burden of managing acute and chronic conditions. Despite that, there are barriers to CDPS. In many jurisdictions, the fee-for-service system does not remunerate prevention activities. Also, some settings lack the clinical resources to address the acute and chronic medical needs of the community. Hence there is limited capacity to implement the PP model in settings that do not compensate for this CDPS or that lack clinical resources to address and manage the acute and chronic conditions. The process could be implemented in other countries with health care systems that have the resources to support this type of activity.
The BETTER tools are a first step to structure CDPS in primary care in a comprehensive, structured, personalized and evidence-based manner and to improve the application of knowledge into practice. The integrated clinical decision-making tools of BETTER 2 provide a resource for clinicians and policymakers that addresses patients’ complex care needs beyond a single disease approach and can be adapted to facilitate CDPS in various primary care clinical settings in urban, rural and remote communities.
Immunohistochemistry with 3 different clones in anaplastic lymphoma kinase fluorescence in situ hybridization positive non-small-cell lung cancer with thymidylate synthase expression analysis: a multicentre, retrospective, Italian study
783050d9-d8ed-43bf-8558-68c12c56923d
9624140
Anatomy[mh]
Lung cancer is the leading cause of cancer-related death worldwide and national data confirm this finding, as it is responsible for 27% of cancer-related deaths in men, while it ranks third in women. Non-small cell lung cancer (NSCLC) accounts for about 85% of all primary lung cancer, with two major histotypes, namely adenocarcinoma (ADC) and squamous cell carcinoma (SqCC) . The discovery of druggable genetic alterations in subsets of patients with NSCLC (particularly in adenocarcinoma) paved the way to the introduction of molecular targeted drugs in thoracic oncology, mainly tyrosine kinase inhibitors (TKIs) . Among others, ALK rearrangements are found in approximately 2-5% of advanced NSCLC patients , . ALK is a receptor tyrosine kinase of the insulin growth factor superfamily, initially identified in anaplastic large cell lymphoma patients. In lung cancer, the most frequent alteration is represented by a translocation of ALK with the echinoderm microtubule-associated protein-like 4 (EML4) gene, resulting in an oncogenic fusion protein where ALK is constitutively active. Moreover, other ALK fusion partners have been described to date, such as nucleophosmin (NPM) and tropomyosin (TPM) , . Crizotinib, a small molecule originally developed as a MET inhibitor and also active against ALK and ROS1 kinases, was approved in 2011 by the U.S. Food and Drug Administration (FDA) for the treatment of ALK-rearranged advanced lung tumours. Its approval also in treatment-naive ALK-positive patients was based on the evidence of its activity in heavily pre-treated ALK-positive NSCLC patients, showing a significant increase in response rate (RR), as well as in progression-free survival (PFS), as compared to first-line platinum-based chemotherapy. The identification of ALK rearrangements was initially based on the break-apart fluorescence in situ hybridization (FISH) test (Abbott Molecular, Abbott Park, IL) adopted as the companion test in crizotinib registration studies. This assay uses two fluorescent probes to flank the highly conserved break point within ALK . A positive result, which is required for drug prescription, is defined as at least 15% of 50 evaluated cells harbouring split signals. The FISH test is characterized by good sensitivity and specificity, but it remains operator-dependent, relatively expensive and time-consuming, and it requires technical expertise, potentially limiting its wide adoption in pathology laboratories . A recent Italian observational study assessing real-life molecular tests in 1787 advanced NSCLC patients showed that ALK FISH evaluation was made in only 920 of 1345 ADC, possibly reflecting such technical issues . Among alternative methods, reverse transcription-polymerase chain reaction (RT-PCR) can detect ALK rearrangements with great specificity and sensitivity. However, this method is time-consuming and can suffer from the poor quality of RNA obtained from formalin-fixed paraffin-embedded tissues and the necessity of PCR multiplexing, because of the wide variation of fusion types , . More recently, next-generation sequencing (NGS) assays have been shown to be highly sensitive and able to identify even novel ALK variants and co-existing mutations , , . Immunohistochemistry (IHC) analysis of ALK fusion protein expression is a valuable screening tool in terms of turnaround time, costs and tumour tissue preservation , , . Recently, Mok et al.
have demonstrated the excellent performance of IHC using the Ventana D5F3 clone in ALK FISH-uninformative patients, suggesting its use alone as a standard testing method for ALK fusion . In 2015, the FDA approved the D5F3 clone (Ventana/Roche, Tucson, AZ) as an alternative companion diagnostic test for crizotinib treatment after the demonstration of 94% overall agreement when compared with the FISH break-apart test. The D5F3-OptiView system (Ventana) was validated as part of the European Harmonization Study, where a binary assessment (i.e., positive or negative) was applied . This assay was highly sensitive (90%) and specific (95%) as well as accurate (93%) relative to the FISH results , . Another IHC assay, based on the 5A4 clone by Novocastra, was validated by the European Thoracic Oncology Platform (ETOP) and showed a sensitivity of 81.3% with a 99% specificity considering scores 2+ and 3+ as positive. The molecular testing guidelines for the selection of lung cancer patients for treatment with tyrosine kinase inhibitors (TKI) suggest either IHC or FISH for ALK rearrangement detection, without making specific recommendations on which IHC clone should be used . Novel ALK inhibitors have been developed using different assays, including IHC , . Finally, chemotherapy represents a valid option for the treatment of ALK-positive patients who develop resistant disease after TKIs, particularly when using a pemetrexed-based regimen . Pemetrexed is a potent antifolate agent currently registered for the treatment of advanced non-squamous NSCLC and malignant mesothelioma. High expression of thymidylate synthase (TS), an enzyme involved in DNA synthesis, confers resistance to pemetrexed, while low levels are associated with cell sensitivity . Shaw et al. first evaluated thymidylate synthase (TS) expression in ALK-rearranged NSCLCs, reporting lower levels when compared to ALK-negative specimens . This Italian multicentre retrospective study was designed to compare 3 different ALK IHC clones (D5F3, 5A4, ALK1) in FISH-positive NSCLC specimens, in order to verify their agreement. Moreover, we evaluated TS expression by real-time PCR in ALK-rearranged cases compared to a FISH ALK-negative control cohort, to further characterise this molecularly distinct subgroup of patients. A description of the association between molecular data and patients’ outcomes in terms of response to therapies (both ALK inhibitors and pemetrexed chemotherapy) was also performed. All the tumours included in this study were reviewed by expert pathologists at each institution and the histologic diagnosis was performed according to the 2021 World Health Organization (WHO) criteria . This study was approved by the ethics committee of each oncologic centre involved. Patients gave written informed consent before inclusion in the study. Each investigator sent the anonymised data to the San Luigi Hospital, which, as coordinating centre, had full access to the dataset. FISH analysis was performed with a commercially available assay (Vysis LSI ALK dual colour, break-apart rearrangement probe, Abbott Molecular, Abbott Park, IL) according to the manufacturer’s recommendations. The FISH test was done locally in accredited molecular pathology laboratories. At least 50 tumour cells in each sample were analysed and scored according to international guidelines , , . IHC staining was performed on 4-μm sections obtained from formalin-fixed and paraffin-embedded tissue blocks and then mounted on charged slides.
After deparaffinization and rehydration, antigen retrieval was performed with Cell Condition Solution-1 (CC1) for 64 minutes at 95°C. The ALK IHC assay was performed using 3 different clones: Novocastra mouse monoclonal antibody p80 ALK (clone 5A4, Leica Biosystems, Newcastle Upon Tyne, United Kingdom); Companion Diagnostic Kit Ventana anti-ALK rabbit monoclonal primary antibody (clone D5F3, Cell Signaling Technology); mouse monoclonal anti-human CD246 (clone ALK1, Dako/Agilent, Carpentaria, CA). All assays were performed using an automated immunostainer (ULTRA, Ventana Medical System, Tucson, AZ). The expression of all samples was scored by experienced pathologists (LR, PG, GR). Expression with ALK1 and 5A4 was scored using a 4-tiered intensity scale (0, negative; 1+, weak; 2+, moderate; 3+, strong), while a dichotomous negative/positive system was adopted with the CDx D5F3 Ventana kit. In all batches, a negative (lack of the primary antibody) and positive (ganglions and nerves of the appendiceal tip and/or a sample from an ALK FISH-positive resected pulmonary adenocarcinoma) control was employed to evaluate the appropriateness of the IHC analysis. When using the 4-tiered scoring system, cases showing no staining were considered as negative, cases with 2+/3+ staining were considered as positive/rearranged and cases with 1+ intensity expression were considered as indeterminate. Quantitative RT-PCR for TS and β-actin was performed in the present case cohort and in a FISH ALK-negative control group (173 ALK-negative and EGFR-negative advanced ADC including 127 men and 46 women) as previously described . Ten-μm-thick sections were used for RNA extraction. The sections were serial to a 4-μm-thick section from the same formalin-fixed paraffin-embedded tumour block used for H&E staining to select appropriate neoplastic areas. RNA isolation and retrotranscription were performed as already reported . Relative cDNA quantification was done using a fluorescence-based real-time detection method with measurements done in triplicate and the comparative Ct method used. Quantitative real-time polymerase chain reaction (qPCR) was performed with an ABI PRISM 7900HT Sequence Detection System (Life Technologies, Applied Biosystems Division, Carlsbad, CA, USA) in a 384-well plate. All qPCR mixtures contained 1 μl of cDNA template (approximately 20 ng of retrotranscribed total RNA) diluted in 9 μl of distilled-sterile water, 1200 nM of each primer, 200 nM of internal probe and TaqMan Gene Expression Master Mix (Life Technologies Thermo Fisher Scientific) to a final volume of 20 μl. Cycling conditions were 50°C for 2 minutes and 95°C for 10 minutes, followed by 46 cycles at 95°C for 15 seconds and 60°C for 1 minute. Baseline and threshold for cycle threshold (Ct) calculation were set manually with ABI Prism SDS 2.1 Software. A mixture containing Human Total RNA (Stratagene, La Jolla, CA) was used as the control calibrator on each plate. β-actin was used as the internal reference gene. The fold change in gene expression levels, expressed in unitless values, was evaluated using the 2^−ΔΔCt method.
STATISTICAL ANALYSIS
A descriptive statistical analysis was performed. Proportion agreement analysis was made using Cohen’s unweighted kappa (k). A k value from 0.61 to 0.8 was considered as substantial and between 0.81 and 1.0 as excellent, according to Landis and Koch.
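For illustration, the two quantitative steps named above can be written out explicitly: the relative expression is 2^−ΔΔCt, where ΔΔCt = (Ct_TS − Ct_β-actin) of the sample minus (Ct_TS − Ct_β-actin) of the calibrator, and inter-assay agreement is summarized with Cohen’s unweighted kappa. The snippet below is a minimal sketch and not the software actually used in the study; the function names and example values are hypothetical.

```python
# Minimal, illustrative calculations only; not the software used for the study's analyses.
from collections import Counter


def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_calibrator, ct_ref_calibrator):
    """Relative expression by the comparative Ct method: fold change = 2 ** (-ddCt)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample              # normalise to reference gene
    delta_ct_calibrator = ct_target_calibrator - ct_ref_calibrator  # same for the calibrator RNA
    return 2 ** -(delta_ct_sample - delta_ct_calibrator)


def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two assays scoring the same cases."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)


# Hypothetical example values: TS expression of one sample relative to the calibrator RNA,
# and agreement between two dichotomised (positive/negative) IHC readings.
print(fold_change_ddct(24.1, 18.0, 26.3, 18.2))                                   # 4.0
print(cohens_kappa(["pos", "pos", "neg", "pos"], ["pos", "neg", "neg", "pos"]))   # 0.5
```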
Comparison between TS expression in FISH-positive ALK-rearranged specimens and a control cohort of ALK FISH-negative ones was performed with the Mann-Whitney and Kruskal-Wallis tests or the Spearman’s test.
IHC was performed on 37 FISH-positive ALK locally advanced or metastatic NSCLC specimens from 7 different Italian Oncology Centres. Diagnoses were made between 2010 and 2015. Patients’ main characteristics are reported in . Median age at diagnosis was 60 years (range 22-81), 22 patients were male (62%) and all patients had adenocarcinoma histology. Data on smoking habits were available for 19 patients: 12 (63%) were current or former smokers, while 7 (37%) were never-smokers. Seven samples were cell-blocks (19%), while the others were biopsies. 31 patients (84%) had stage IV disease at diagnosis, 3 (8%) stage IIIA, while initial staging was unknown for 3 (8%).
IMMUNOHISTOCHEMISTRY
The scoring distribution for each antibody is represented in and . Three cases (8.1%) showed strong (3+) and 13 (35%) mild (2+) staining intensity with the ALK1 antibody, while 17 (45.9%) had weak intensity (1+) and 4 were considered to be negative. When using the 5A4 antibody, 19 (51.3%) samples showed strong (3+) staining, 14 (37.9%) mild (2+) and 4 (10.8%) were considered to be negative (0); none of the samples showed weak intensity. Results with D5F3 were scored as positive or negative only, as previously indicated: 33 positive (89.1%) and 4 negative (10.9%) cases were found. When considering 2+ and 3+ scored samples as positive, 16/37 cases (43%) were positive with ALK1, as compared to 33/37 (89%) for 5A4. The concordance between each pair of antibodies is reported in , and . The proportion of agreement between ALK1 and 5A4 was 0.1691 (95% CI 0-0.4595); it was 0.1691 for ALK1 and D5F3 (95% CI 0-0.4595) and 1 for D5F3 and 5A4. When considering 3+ cases only as positive, three cases were positive using ALK1 as compared to 19 with 5A4. The concordance between each pair of antibodies using this cut-off is reported in . The proportion of agreement using this score was 0.1543 (95% CI 0-0.4665) between ALK1 and 5A4, 0.0212 (95% CI 0-0.1736) for ALK1 and D5F3, and 0.2269 (95% CI 0-0.5462) for 5A4 and D5F3.
THYMIDYLATE SYNTHASE EXPRESSION ANALYSIS
TS expression analysis was done in 36 samples; in only one sample the analysis was not performed, due to material exhaustion. Median TS expression of the whole ALK-positive case series was 6.07 (range 1.28-14.94). When compared with the ALK-negative advanced population, which had a median TS expression value of 8.59 (range 1.04-27.3), our ALK-positive cases had significantly lower TS expression (p = 0.0053).
RESPONSE TO THERAPY
Clinical data on prescribed therapies were available for 30 patients (81%). Twenty-four of them received at least one ALK TKI, crizotinib in 23 patients (96%) and ceritinib in 1 (4%). Among these patients, the best response with the first TKI administered was: partial response (PR) in 16 cases (67%), stable disease (SD) in 3 (12%) and progressive disease (PD) in 5 (21%).
Interestingly, 2 of these latter patients were IHC negative with all of the 3 tested clones, while the others showed IHC positivity (all 5A4 3+ and D5F3 positive, with different ALK1 scores). Among those who obtained disease control (SD or PR), all but one was IHC positive with the 5A4 and D5F3 clones. The only IHC-negative patient (patient #36) had a partial response on crizotinib treatment. Data about pemetrexed-based chemotherapy were available for 14 patients. Among these patients, 10 obtained disease control (7 SD, 3 PR), while 4 had disease progression as best response to treatment. The median number of chemotherapy cycles was 4 (range 1 to 6). Nine patients obtaining disease control with pemetrexed-containing regimens were ALK IHC positive with the D5F3 and 5A4 clones, while only 5 were also positive with ALK1. One patient (number 36) who achieved SD with cisplatin-pemetrexed was ALK IHC negative; interestingly, the same patient derived benefit from crizotinib second-line treatment.
ALK rearrangements in NSCLC were initially diagnosed using the break-apart FISH assay, a time-consuming and operator-dependent technique that is often difficult to apply in small biopsies with artifacts or in cell blocks , , , . Following the demonstration of high concordance with the break-apart ALK FISH assay, IHC with the D5F3 clone using the Ventana platform and OptiView amplification system has been approved by the US FDA as an alternative diagnostic tool for ALK rearrangement detection and TKI treatment . In contrast, the European label of TKI requires only proof of an “advanced ALK-positive NSCLC” without any statement about a preferred assay, allowing each pathology laboratory to use either FISH, IHC or molecular biology assays such as RT-PCR or NGS, alone or in combination , . The main advantages of IHC are its wide distribution among pathology laboratories, easy interpretation and lower cost . Moreover, IHC requires far less material than FISH, an important issue considering the prevalence of small biopsies and the vast number of tests currently needed for NSCLC diagnosis and correct management . For such reasons, many groups investigated different IHC clones with or without amplification systems and compared them with other assays. Of note, in the era of an NGS approach to NSCLC predictive biomarkers, ALK gene fusion is the only targetable oncogenic driver for which IHC may authorise the treatment with specific TKIs in case of positivity , , , , . As previously proposed , we considered positive those specimens scored as 2+ and 3+ with any clone. The agreement between the 5A4 and D5F3 clones, when considering both 2+ and 3+ as positive, was excellent. On the contrary, as previously demonstrated , the agreement was poor when considering ALK1, a clone with very low sensitivity. A major limitation of this analysis is the absence of a FISH-negative control, since differences in specificity could make the agreement between the antibodies even weaker. Being limited to FISH-positive cases, our analysis is able to describe the sensitivity of the 3 different antibodies in classifying as eligible for treatment all the cases that would have been selected by FISH, without the possibility to describe their specificity performance in FISH-negative cases. However, literature data suggest that all tested clones are characterised by high specificity (approaching 100%) but variable sensitivity compared to the FISH assay , , , , supporting the current study design. We also repeated the concordance agreement analysis considering only 3+ cases as positive. Using this scoring system, the agreement between ALK1 and 5A4 or ALK1 and D5F3 was poor, and it became poorer also between 5A4 and D5F3. In our series, 4 cases would have been considered negative (score 0) by all IHC clones despite being FISH positive.
Interestingly, among the 3 patients who received crizotinib treatment, 2 experienced rapid progression and death within 1 month after the start of treatment, while the other patient had a partial response lasting for 11 months. Of note, both the early progressors had borderline FISH positivity (16% and 18% of positive cells, respectively). Using different ALK detection methods, discordant cases have been increasingly reported. A recent review of the literature shows a response rate of 100% in IHC-positive/FISH-negative as compared to 46% in IHC-negative/FISH-positive cases . Such data underline the role of IHC analysis in the selection of patients for ALK inhibitors. For this reason, a recently published algorithm by Marchetti and colleagues suggests screening all specimens with a highly sensitive IHC assay (such as the D5F3 clone with the OptiView system or Novocastra 5A4), using FISH for doubtful cases such as those scored as 1+ or 2+, or for negative cases with certain clinical or pathological characteristics often reported in ALK-rearranged lung adenocarcinomas (young age at diagnosis, light/no smoking habit, adenocarcinoma with signet ring cell features) . In other words, the use of 2 different ALK detection techniques when facing a patient showing clinico-pathological characteristics suggestive of ALK rearrangement may significantly reduce the chance of missing an ALK-positive case that could receive clinical benefit from specific inhibitors. Apart from new-generation ALK inhibitors, chemotherapy still represents an option in daily clinical practice when ALK-positive patients experience disease progression. Of note, pemetrexed in association with cisplatin has proven to be particularly active in lung ADC , particularly in tumours harbouring ALK rearrangement, owing to the low levels of thymidylate synthase (TS), a key enzyme in folate metabolism. In light of this finding, we performed TS analysis in 36 of our specimens, aimed at further confirming this observation and supporting the role of TS as a possible predictive marker in ALK-positive patients. In agreement with a few similar investigations , , we observed that TS expression was significantly lower as compared to a series of ALK-negative ADC. Although the limited size of the two cohorts partly limits the consistency of our results, we support the possible role of TS levels in the choice of pemetrexed-cisplatin chemotherapy in ALK-positive patients at disease progression after ALK inhibitors. IHC is a reliable tool for the diagnosis of ALK-rearranged lung ADC, and the D5F3 and 5A4 clones showed the highest percentage of agreement. We also confirmed previous findings that IHC could be used as a screening tool, with positivity alone authorising the adoption of ALK TKIs. Moreover, since FISH-positive/IHC-negative cases rarely respond to TKIs, IHC has a role in confirming a positive FISH result in patients with a borderline number of positive cells. Finally, we confirmed the role of TS expression in this setting, highlighting a significantly lower level in ALK-rearranged patients, which possibly explains the higher sensitivity to pemetrexed-based chemotherapy and makes TS a promising predictive marker of chemotherapy efficacy at disease progression during TKIs.
High-risk coronary plaque of sudden cardiac death victims: postmortem CT angiographic features and histopathologic findings
4429e431-e10f-45cd-9bcf-99cb6b912329
11306740
Pathology[mh]
Ischemic heart disease (IHD) resulting from atherosclerotic coronary artery disease (CAD) remains the most frequent cause of death in Western countries . Acute manifestations of CAD, the acute coronary syndromes, relate primarily to acute thrombotic lumen occlusion initiated by disruption of a plaque through either fibrous cap rupture or surface erosion. Pathological analysis of these acute coronary culprit plaques at autopsy has revealed several tissue characteristics of lesions that are particularly prone to disruption, the so-called ‘high-risk plaques’ (HRP), formerly referred to as vulnerable plaques (VP). The most distinctive features are a thin-cap fibroatheroma (TCFA), a large lipid atheroma, significant plaque inflammation, plaque hemorrhage, and the occurrence of spotty calcifications . In clinical practice, coronary computed tomography angiography (CCTA) is an established and non-invasive imaging modality that can evaluate CAD’s presence, severity, and distribution. In the setting of acute chest pain, growing evidence supports the use of CCTA, with high reported accuracy for obstructive disease, at lower cost, and with less radiation than nuclear imaging testing . It allows not only the detection of coronary artery stenosis but also quantitative analysis of stenosis rate or total occlusive disease and qualitative assessment of parameters of plaque morphology . Radiological features specific to high-risk plaques (HRP) include the presence of low-attenuation plaque, the napkin-ring sign (NRS), spotty calcifications (SC), and positive remodeling . Also, the adventitia has received more attention, and it was suggested that neovascularization of the adventitial vasa vasorum (VV) and local perivascular inflammation play a key role in the development and progression of atherosclerotic plaques . The outcome of a recent systematic review and meta-analysis on CCTA plaque characterizations and major adverse cardiovascular events (MACE) suggested that CCTA features of HRP are a likely independent predictor of MACE, and proposed inclusion of CCTA evaluation of HRP in clinical practice . Sudden cardiac death (SCD) related to CAD is an extreme form of MACE. In postmortem practice, coronary calcifications as a sign of atherosclerotic progression can be detected easily by postmortem CT (PMCT). However, the presence of non-calcified plaque, coronary stenosis assessment, and most HRP characteristics can be evaluated only after visualization of the lumen of the vessel with a circulating contrast agent. Additionally, invasive postmortem angiographic techniques such as multi-phase postmortem computed tomography angiography (MPMCTA) enable the evaluation of the coronary artery lumen, stenosis, suspected occlusions, and the above-mentioned characteristics of HRP . Histopathological investigation of cross sections of coronary arteries is considered the “gold standard” postmortem method to investigate the heterogeneous composition of atherosclerotic plaques. The features of high-risk/vulnerable plaques (thin caps, large atheroma, inflammation) and the diverse parameters of so-called “acute plaques” (plaque rupture/erosion, mural thrombus, occluding thrombus, and thrombus age) are of relevance for the identification of culprit plaques in sudden death victims at autopsy . Currently, how and to what extent the radiological markers of HRP mentioned above relate to the histopathological features of acute plaques is unknown.
The goal of the study was to evaluate if clinically observed signs of so-called HRP can be observed in postmortem imaging at the level of the fatal plaque and to compare them to the histological findings. Case selection The study cohort consists of a series of autopsy cases of SCD victims due to acute coronary artery disease. Cases were collected from 2017 to 2020 and include all patients for whom both multi-phase postmortem CT angiography (MPMCTA) data and postmortem tissue blocks for further histopathological investigation of the coronary culprit lesion were available (see study flowchart in Fig. ) . A full autopsy was performed on all cases according to the international guidelines after an initial external examination and multi-phase postmortem CT angiography (MPMCTA) . Clinical data, including age, gender, type and duration of symptoms, resuscitation attempts, and medication, were recorded. Data retrieved from the autopsy report were body weight, BMI, heart weight, topographic location of coronary occlusions in the arterial tree, coronary dominance, and results of toxicological analyses and postmortem serum troponin levels, if available. Cases showing putrefaction, carbonization, traumatic lesions of the heart (not related to resuscitation attempts), and cases after percutaneous coronary revascularization procedures and/or coronary artery bypass grafting (type 4 and 5 of myocardial infarction) were excluded. Cases where concomitant pathology or toxicology results could explain the death were excluded. Histopathological examination of coronary arteries Archived segments of coronary arteries containing the culprit plaque (totally occluded or at least mural thrombosed lesion) during autopsy were collected for histological examination. For histopathological analysis of the occluded coronary artery, scanned hematoxylin and eosin (H&E) and trichrome-stained slides were reviewed. Two independent observers (with more than 10 years of experience in cardiovascular pathology) performed pathological evaluation. Consensus reading was obtained for the evaluation. Coronary artery stenosis was graded on a 3-point scale as less than 50%, 50–75%, and more than 75%. Calcifications of the plaque were evaluated considering their diameter and following the literature as without calcification, microcalcifications: 0.5–15 μm, punctuate/fragmented: 15 μm- 3 mm, sheet > 3 mm where both collagen matrix and necrotic core were calcified and nodular showing breaks in calcified plates with fragments of calcium separated by fibrin . Regarding the composition of the plaque, plaques were classified as fibrous, fibrolipid or calcified. Additionally, the lipid core size was graded semiquantitatively as less than 10% lipids, 10–50% lipids, or more than 50% lipids of total plaque area. Plaque complications were described as plaque rupture (disruption of a TCFA) with expulsion of the underlying necrotic core, clearly recognizable by the presence of cholesterol clefts), plaque erosion (thrombus adjacent to intact plaque surface with denuded endothelium), or a protruding calcified nodule (thrombi associated with eruptive, dense, calcific nodules) . Age of the thrombus was categorized as fresh, subacute/ (lytic), or organized/old (organized). Intraplaque inflammation was evaluated on a 2-point scale considering the percentage of area with inflammatory cells as none or with small foci (0–10%), moderate or severe (more than 10%). 
Adventitial inflammation was graded on a 3-point ordinal scale as follows: 1, normal (scarce isolated cells); 2, inflammatory foci occupying less than 50% of the circumference of the artery, and the inflammatory zone’s thickness remaining smaller than the media’s thickness; 3, inflammatory foci occupying more than 50% of the arterial circumference or inflammatory zone’s thickness exceeding the media’s thickness. Vasa vasorum extent was graded on a 3-point ordinal scale as follows: 1, normal; 2, increased, less than 50% of the arterial circumference; 3, markedly increased, more than 50% of the arterial circumference. Methods for the radiological evaluation For all cases, a postmortem CT angiography was performed on a 64-row multidetector CT system (CT LightSpeed VCT, GE Healthcare) according to the standard protocol of the MPMCTA , including a noncontrast acquisition covering the entire body (from head to toe) followed by three angiographic phases, arterial, venous and dynamic. The same parameters were applied for each angiographic phase: helical acquisition from vertex to pelvis at 120 kV, 200–400 mA modulation, noise index 35, pitch 0,984:1, detector coverage 40 mm, slice thickness 1,25 mm, interval 0.625 mm, tube rotation 0.8 s, SFOV: 50 cm, and a standard algorithm of reconstruction, window width 400 (WW) and window level 40 (WL). For the arterial phase, a volume of 1200 ml of contrast agent, an oil-based solution consisting of a mixture of paraffin oil with 6% of contrast agent (Angiofil®, Fumedica), with a flow rate of 800 ml/min was injected via femoral artery cannulation. A volume of 1800 ml of the same contrast mixture was injected for the venous phase at a flow rate of 800 ml/min via femoral vein cannulation. For the dynamic phase, an additional 500 ml of contrast mixture was injected via the arterial system. During this phase, the filling was synchronized with acquisition and adapted in function of time of acquisition (time of injection 150 s) with a flow rate of 200 ml/min. Coronary plaques in MPMCTA at the lesion level determined at autopsy were analyzed after curved multiplanar (CMPR) and curvilinear reconstructions (CVR) using an advanced postprocessing software (Advantage Workstation, GE Healthcare) on a specific heart standard algorithm of reconstruction with a window width of 400 and a window level of 40, slice thickness of 0,625 mm with an interval of 0,312 mm, on the arterial and dynamic phases. The venous phase was not analyzed, considering it would not add further information. The extent of coronary artery calcifications was evaluated using a semi-automated tool to calculate the coronary calcium score(CCS) on a specific unenhanced heart acquisition: cine rotation 0,9 s, detector coverage 20 mm, slice thickness 2,5 mm, acquisition of 8 images per 0,5 s, SFOV: 50 cm, 120 kV, 400 mA, DFOV: 25 cm, standard algorithm, no iterative reconstructions. The score was recorded globally and then separately for the involved vessel. Additionally, we assessed the following parameters: lumen diameter stenosis, plaque enhancement, and the previously mentioned HRP characteristics consisting of remodeling index, NRS, low attenuation plaques, and SC. NRS was defined on a cross-section of the coronary artery at the level of the culprit lesion as a thin ring-like hyperattenuating rim surrounding a low attenuating eccentric structure and was defined as present or absent. 
Low attenuation plaque component (< 30 HU) was defined as the mean CT number within three regions of interest (approximately 0.5–1.0 mm²) randomly placed in the non-calcified portion of the plaque and was categorized as absent, <1 mm or ≥ 1 mm. Spotty calcification was defined as a small, dense (> 130 HU) plaque component surrounded by noncalcified plaque tissue, ≤ 3 mm in curved multiplanar reformat, and was classified as absent, less than 1 mm, or between 1 and 3 mm. The degree of vessel stenosis was classified into three categories: <50%, between 50 and 75%, and > 75%. The remodeling index (RI) was calculated as the vessel cross-sectional area at the site of maximal stenosis divided by the average of the proximal and distal reference segments’ cross-sectional areas . An RI threshold of 1.1 or more was considered to define positive remodeling . The measurement of minimal lumen area (MLA) was done on a curvilinear reconstruction in the short axis of the vessel and expressed in mm². By simultaneously displaying the noncontrast, arterial, and dynamic phases, the observers could determine plaque enhancement, which was considered positive when the plaque exhibited visually higher attenuation in the dynamic phase compared with the native and arterial phases. Image analysis was performed by two radiologists: one with > 10 years of experience and 5 years of forensic radiology practice, and a fellowship-trained cardiovascular radiologist with 8 years of experience. If consensus was not obtained, a senior cardiovascular radiologist with over 20 years of experience helped resolve the case. Statistical analyses Statistical analyses were performed using STATA 16 software ( StataCorp. 2019. Stata Statistical Software: Release 16. College Station, TX: StataCorp LLC ). Descriptive statistics of the study population’s characteristics and their radiological and histopathological data were reported as mean (sd) for continuous variables and as number (percent) for categorical variables. Correlations, which seemed pertinent considering the mechanism of coronary artery disease and literature data, were tested between radiological and histopathological findings using Fisher’s exact test to check the hypothesis that the rows and columns in a two-way table are independent. The association between the CAC score of the concerned vessel and the degree of histological calcification detected was assessed by the Kruskal-Wallis equality-of-population rank test. The strength of the association between the global and culprit-vessel CAC score was assessed using a robust regression and Spearman’s rho coefficient. The agreement between stenosis of the lumen at histology and at radiological examination was assessed using the kappa statistic of interrater agreement. 
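To make the remodeling-index definition above concrete, the following short Python sketch (an illustration only, not the workstation software used in this study; the numeric areas and function names are hypothetical) computes RI from cross-sectional areas and applies the ≥ 1.1 threshold for positive remodeling.

def remodeling_index(lesion_area_mm2, proximal_ref_mm2, distal_ref_mm2):
    # RI = lesion cross-sectional area / mean of proximal and distal reference areas
    reference_area = (proximal_ref_mm2 + distal_ref_mm2) / 2.0
    return lesion_area_mm2 / reference_area

def is_positive_remodeling(ri, threshold=1.1):
    # Positive remodeling is defined here as RI >= 1.1
    return ri >= threshold

# Hypothetical measurements (mm^2) for one culprit lesion
ri = remodeling_index(lesion_area_mm2=18.4, proximal_ref_mm2=14.1, distal_ref_mm2=12.5)
print(f"RI = {ri:.2f}; positive remodeling: {is_positive_remodeling(ri)}")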
Study population characteristics, autopsy and radiological findings Out of the total number of 1109 autopsies performed during the study period, 50 cases fulfilled all inclusion criteria. After histological examination, 6 cases in which the coronary thrombosis/culprit lesion was not confirmed (technical problems, including inadequate decalcification) were excluded, and 4 cases were excluded after radiological examination because of non-interpretable MPMCTA (one had layering artifacts, two had no identifiable culprit lesion on CT, and one had excessive image noise due to obesity). Cases with only some minor non-evaluable plaque features on autopsy or MPMCTA were not excluded. Finally, after excluding the 10 above-mentioned cases for major histological or radiological artefacts, 40 cases were included (28 men and 12 women). The flow diagram for study cases is shown in Fig. . The characteristics of the study population, postmortem serum levels of troponins, and results of the histological and radiological assessments are summarized in Tables , and . Correlations between radiological and histopathological findings Coronary artery calcium (CAC) score There was a very strong correlation between the global CAC score and that of the vessel with the culprit lesion. The calculated β-coefficient using a robust regression model was β = 0.52 (p-value < 0.0001), and the Spearman’s rho correlation was rho = 0.91 (p-value < 0.0001) (Fig. ). The association between the CAC score of the concerned vessel and the degree of histologically detected calcification was assessed by the Kruskal-Wallis equality-of-population rank test. A statistically significant association was observed ( p = 0.033). Higher values of the CAC score were observed for histological groups 3 and 4 (calcium sheet and nodule, respectively) (Fig. ). 
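For readers who wish to reproduce this type of analysis, the following Python sketch shows one possible implementation of the statistics reported above (robust regression, Spearman's rho, and the Kruskal-Wallis test); it is an illustrative re-implementation with made-up input arrays, not the STATA code actually used.

import numpy as np
from scipy import stats
import statsmodels.api as sm

# Hypothetical per-case values (the real data are given in the tables)
global_cac = np.array([0, 35, 120, 310, 560, 900, 1500, 2200], dtype=float)
culprit_cac = np.array([0, 20, 70, 180, 300, 510, 800, 1300], dtype=float)
histo_calc_grade = np.array([1, 1, 2, 2, 3, 3, 4, 4])  # 1 = micro ... 4 = nodular

# Robust regression of culprit-vessel CAC score on global CAC score
rlm_fit = sm.RLM(culprit_cac, sm.add_constant(global_cac)).fit()
print("robust regression slope:", round(rlm_fit.params[1], 3))

# Spearman's rho between global and culprit-vessel CAC scores
rho, p_rho = stats.spearmanr(global_cac, culprit_cac)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")

# Kruskal-Wallis test of culprit-vessel CAC score across histological calcification grades
groups = [culprit_cac[histo_calc_grade == g] for g in np.unique(histo_calc_grade)]
h_stat, p_kw = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_kw:.4f}")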
Spotty calcifications (SC) A significant correlation (p-value 0.002) was observed between the presence of SC detected at radiological examination (Fig. ) and the presence of punctate/fragmented calcification at histology. There was no significant correlation between the presence of SC on MPMCTA and the type of plaque, type of thrombosis, or composition of the plaque. Low attenuation plaque (LAP) There was no significant association between the presence of LAP on radiological examination and the histological type of plaque, composition of the plaque, age of the thrombosis, or degree of calcification, inflammation, and vasa vasorum. It should be noted that all cases with a lipid core consisting of over 50% of lipids histologically (considered as vulnerable plaques ) were visualized in more than 50% of PMCTA. Napkin-ring sign (NRS) The NRS was observed in 40% of cases. The radiological presence of NRS (Fig. ) correlated significantly with a fibrolipidic composition of the plaque (p-value 0.007), severe intraplaque inflammation (p-value 0.017), severe adventitial inflammation (p-value 0.021), and an increased VV (p-value 0.012). No significant correlation was observed for the degree of calcification. Remodeling index (RI) A positive RI (≥ 1.1) was observed in 75% of cases. There was a significant difference (p-value 0.005) in RI for chronic total occlusion (CTO) versus an acute plaque complication (eroded and ruptured plaques). Regarding the composition of the plaque, RI was < 1 for fibrotic plaques and the highest values were observed for calcified plaques, with a significant difference between groups (p-value 0.064) (Figs. and ). There was no significant correlation of plaque or adventitial inflammation with RI. Enhancement of the plaque Enhancement of the plaque was noticed in 58.3% of cases (Table ). Significant correlations for the presence of plaque enhancement (Fig. ) were observed for plaques with a fibrolipidic composition (p-value 0.003) and severe intraplaque inflammation (p-value 0.011). Enhancement of the plaque was not observed in fibrous and calcified plaques. There were no significant correlations between enhancement and other histological parameters. Stenosis There was poor agreement between stenosis of the lumen at histology at the plaque level and at radiological evaluation, both for all cases (58.97%, Kappa − 0.0833; expected agreement 62.13%) and for cases with stenosis over 75% (58.97%, Kappa-0.1387, expected agreement 53.97%). 
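As a concrete illustration of the agreement analysis just described, the brief Python sketch below (with invented paired grades, purely for demonstration) computes Cohen's kappa between the three-category stenosis grades assigned at histology and at MPMCTA.

from sklearn.metrics import cohen_kappa_score

# Paired stenosis categories per case: 0 = <50%, 1 = 50-75%, 2 = >75% (hypothetical values)
histology_grade = [2, 2, 1, 2, 0, 1, 2, 2, 1, 0]
mpmcta_grade = [1, 2, 2, 1, 1, 2, 2, 1, 0, 1]

kappa = cohen_kappa_score(histology_grade, mpmcta_grade)
print(f"Cohen's kappa = {kappa:.3f}")  # values near zero or below indicate poor agreement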
In clinical radiology, various non-invasive cardiovascular imaging modalities have been applied to investigate the presence of coronary artery disease or its consequences by detecting coronary artery calcification, stenosis, or myocardial infarction, respectively. However, the main efforts of modern non-invasive imaging modalities are directed towards identifying asymptomatic non-obstructive vulnerable plaques by detecting signs of vulnerability such as the presence of low-attenuation plaque, napkin-ring sign, spotty calcifications, and positive remodeling . In this study, we demonstrated that these signs can be detected at the level of the fatal coronary occlusion in cases of SCD. We observed these signs in our series: SC in 49% of cases, LAP in 46% of cases, and NRS in 40%. In 58% of cases, we observed a similar type of plaque enhancement as was recognized previously in carotid arteries as a marker of plaque vulnerability closely related to ischemic stroke . We recently reported that the latter correlates with histopathological signs of plaque inflammation, potentially serving as an additional imaging marker of plaque vulnerability . Our findings are in line with case reports of suspected sudden coronary death, showing how CCTA can be employed to detect high-risk plaque features using histopathology as a gold standard . In clinical practice, an RI threshold ≥ 1.1 visualized by CCTA was suggested to define positive remodeling . In our series, we observed RI values over 1.1 in 75% of cases, and the mean RI was 1.39 ± 0.71. 
We also observed that the RI values were lower in cases with fibrotic plaques and higher for calcified plaques, which is compatible with postmortem and clinical studies . The evaluation of coronary stenosis is considered critical autopsy information for the interpretation of CAD, as stenosis over 75% can be considered a potential substrate for SCD in the absence of other causes . We observed poor agreement between stenosis of the lumen at histology at the plaque level and radiological evaluation. In previous postmortem studies, Morgan et al. reported agreement between PMCTA and histological examination of the culprit lesion at autopsy for critical stenosis of > 75%, with a sensitivity of 85.7% and specificity of 91.5% . However, discrepancies between autopsy and PMCTA were reported when histology was assessed on a segmental basis, especially in regions of densely calcified vessels considered pathologically critical stenosis. Singh et al. investigated the sensitivity and specificity of PMCTA versus histopathology at autopsy in diagnosing coronary artery stenosis over 70% and observed a sensitivity of 61.5% and specificity of 91.7%. The authors reported a limited value of PMCTA in cases with pericardial hematoma and in stented coronary arteries, which were excluded from their study . This indicates that stenosis of coronary arteries at imaging should be interpreted with caution.

Coronary artery calcifications are readily observed on PMCT and are considered a marker of atherosclerosis. However, it is well known that even a zero or low CAC score on native PMCT cannot exclude the presence of myocardial infarction related to ACAD. This paradoxical discrepancy between imaging and autopsy findings can be explained by considering the pathophysiology of atherosclerosis and coronary thrombosis resulting in SCD, especially in young patients . In our series, the median coronary artery calcium score was 314, corresponding to the “moderately to severely increased risk” group in clinical practice. However, there was probably some selection bias in the cases referred for PMCTA, as cases without calcium on native PMCT did not systematically undergo postmortem angiography.

In conclusion, clinically observed signs of so-called HRP can be observed in postmortem imaging at the level of the fatal plaque in the majority of cases. Moreover, enhancement of the plaque could be considered a new marker of inflammation, which could have some clinical role in risk evaluation. These signs could be regarded as additional signs of vulnerability of the coronary plaque when interpreting PMCTA and may prove useful for developing prediction models to differentiate or diagnose sudden coronary death at forensic autopsy. More studies are needed to confirm and extend our knowledge of these signs in postmortem radiological practice. The study’s major limitation is its retrospective and forensic nature. Some of the collected coronary arteries had to be excluded because of technical problems, mostly related to absent or insufficient decalcification.
TDP‐43 association with Subiculum and CA1 Hippocampal Subfield Atrophy in Primary Age‐Related Tauopathy
b558aa25-7e4a-49dc-bd3a-b7fe65169201
11713620
Forensic Medicine[mh]
The life strategy of bacteria rather than fungi shifts in karst tiankeng island-like systems
7aca623e-c14b-4a86-ab90-8b47ca37d734
11653732
Microbiology[mh]
Karst tiankengs are large-scale negative surface terrain features that form a habitat island-like system; they were newly recognized at the beginning of the 21st century and are known as “the most spectacular karst landscape on the earth” . A karst tiankeng is characterized by its large volume, is surrounded by vertical cliffs, and is connected to an underground river at its bottom . The interior of a karst tiankeng maintains an independent pristine habitat and is a natural complex of geology, climate, soil, animals and plants, and microorganisms . Karst tiankengs vary in area and are isolated by vertical cliffs, making them a typical habitat island-like system . Our previous studies have confirmed that tiankengs are important “reservoirs for biodiversity conservation” and “species refuges” . The relationship between species diversity and island (or island-like) area, known as the species-area relationship (SAR), is among the most general laws in ecology . SAR patterns contribute to understanding how biodiversity is lost as a result of habitat loss . However, beyond diversity, microbial functional traits in biogeography remain largely unknown. Soil microbial communities are highly complex in terrestrial ecosystems, and microbial ecologists have proposed a classification of soil microbial functional traits based on microbial life strategies and growth rates . Trait-based microbial life strategies support crucial soil functions by regulating soil structure and biogeochemical cycles . In recent years, a growing body of work involving microbial life strategy analysis has been applied in many environments and has provided a key dimension for describing community functions beyond structure and diversity . For instance, rocky desertification succession , karst vegetation restoration , and grassland restoration all alter soil microbial life strategies. Microbial trait-based life strategies are key indicators of community functioning and represent interrelated traits shaped by evolutionary and physiological trade-offs under given environmental conditions . To fully understand soil microbial ecology in karst tiankengs, it is necessary to systematically consider trait-based life strategies. This knowledge can significantly enrich our comprehensive understanding of soil microbial ecology within karst tiankeng ecosystems. Previous theoretical and empirical studies have demonstrated three main candidate mechanisms that contribute to a positive SAR: the sampling effect, the area per se effect, and the habitat heterogeneity effect . Generally, K-strategists dominate in oligotrophic environments and are characterized by slow growth rates , while r-strategists are mainly distributed in environments rich in labile nutrient fractions and are characterized by rapid growth rates . Following this logic, larger islands, which harbor higher-quality habitats than smaller islands, would be expected to facilitate a shift from K- to r-strategists. Smaller islands are usually characterized by stronger edge effects and harbor habitats with lower soil moisture or nutrient levels . In addition, previous studies indicated that island remoteness influences the life-history characteristics of arbuscular mycorrhizal (AM) fungal communities . However, the isolation characteristics of karst tiankengs differ markedly from those of oceanic islands. Thus, whether habitat loss can lead to a shift in microbial life strategies is an important and unanswered question. 
Bacterial and fungal communities are the main components of the soil microbiota and exhibit distinctly different life strategies and morphological characteristics , leading to disparate sensitivities to spatial changes. Previous studies indicated that soil bacteria and fungi exhibit similar biogeographic patterns but different mechanisms . In addition, soil microbiota are composed of a small number of abundant taxa and a large number of rare taxa . Heterogeneity in substrate preference and adaptation to environmental stresses is an important reason for the differences in soil microbial abundance . Abundant taxa occupy a wide niche width and are more resilient to environmental challenges. Previous studies primarily aimed at the entire microbial community life strategies and ignored the differences in life strategies of abundant and rare taxa. In this study, we investigated the spatial scaling of bacterial and fungal composition, structure, and functional traits, using microbial data (bacteria and fungi) collected from 26 karst tiankengs in two typical karst tiankeng groups in China (Dashiwei and Zhanyi). We aimed to answer three key questions: (i) Whether abundant and rare taxa have different response patterns to the unique habitats of karst tiankengs? (2) How do the life strategies of soil microbes shift with karst tiankeng area and isolation? (3) What are the key factors influencing the life strategies of soil microbes? Study sites and sampling This study was conducted at the two typical karst tiankeng groups, including dashiwei tiankeng group (DSW) in Guangxi Province (24°30′–25°03′N, 106°10′–106°51′E) and zhanyi tiankeng group (ZY) in Yunnan Province (25°35′–25°57′N, 103°29′–103°39′E). These two karst tiankeng groups are distributed in the biodiversity hotspot areas of China and have preserved a systematic and complete tiankeng evolution chain. The dense distribution and variety of tiankengs are ideal areas for island biogeographic research. The climate type of DSW is mid-subtropical monsoon, with an average annual temperature of 16.6°C and average annual precipitation of 1140 mm. The main vegetation types were evergreen broad-leaved forest and evergreen deciduous broad-leaved mixed forest. The soil type is mainly neutral or alkaline limestone soils. The climate type of ZY is subtropical plateau monsoon, with an average annual temperature of 14.5°C and average annual precipitation of 1008 mm. The main vegetation types were evergreen broad-leaved forests and coniferous forests. The soil type is mainly red soil. A total of 12 (DSW) and 14 (ZY) karst tiankengs were selected as our study sites (Fig. S1). We calculated the area of karst tiankeng using ArcGIS 10.7 (ESRI, Inc., Redlands, CA, USA). Isolation was measured as the karst tiankeng depth and measured using real-time kinematic. The morphological data of the 26 karst tiankengs are presented in Table S1. Sampling was carried out in September 2022. On each karst tiankeng, we established two to six 10 ×  10  m 2 plots. Three 1 ×  1  m 2 quadrats were randomly set in each plot, and soil samples were collected from the 0–15-cm depth using the five-point sampling method. The three quadrat soil samples were mixed to form one composite soil sample, and a total of 93 soil samples were obtained. The fresh soil samples were transported in ice boxes and stored in the laboratory for further analysis. 
The soil samples were divided into three parts, one part was placed at −80°C for DNA extraction, one part was used for soil moisture content determination, and the remaining portions of soil were naturally air-dried for soil physicochemical property analysis. High-throughput sequencing Soil microbial DNA was extracted using the CTAB methods. The extracted DNA concentration and purity were determined using the 1% agarose gel electrophoresis. For bacteria, PCR amplification of the V4-V5 region was performed using primer pair 515F (5′-GTGCCAGCMGCCGCGGTAA-3′)/907R(5′- CCGTCAATTCCTTTGAGTTT -3′). For fungi, PCR amplification of the ITS1-1F region was performed using primer pair ITS1-1F-F (5′- CTTGGTCATTTAGAGGAAGTAA -3′)/ITS1-1F-R (5′- GCTGCGTTCTTCATCGATGC -3′) . Using the Bio-Rad T100 gradient PCR instrument, the PCR reaction was performed with Phusion Master Mix (2×), forward primers (0.2 µM/µL), reverse primers (0.2 µM/µL), gDNA (1 ng/µL), and sterile water. The PCR amplification procedure was as follows: pre-denaturing at 98°C for 1 min; the 30 cycles included 98°C, 10 sec; 50°C, 30 sec; 72°C, 30 sec; and 72°C, 5 min. The PCR product was detected by agarose gel electrophoresis at a concentration of 2%, and after mixing the samples at an equal concentration according to its concentration, the PCR product was purified by agarose gel electrophoresis at a concentration of 1 × TAE and 2% and cut glue recovery target strips. PCR product purification was performed using the Qiagen gel recovery kit (Qiagen, Germany). Libraries were constructed using the TruSeq DNA PCR-free sample preparation kit (Illumina, USA), qualifying the libraries were quantitated and tested by Qubit 4.0 fluorometer (Invitrogen, Thermo Fisher Scientific, OR, USA). The qualified libraries were sequenced on the NovaSeq 6000 PE250 (Illumina, San Diego, CA, USA). The raw data were quality-controlled and merged using the QIIME2 programs (v2021.2). The DADA2 plug-in in the Qiime2 software was applied to the filter, denoising, merge processes, and clustered into amplicon sequence variant (ASV) . The QIIME2 feature-classifier was used for taxonomic annotations of the bacterial and fungal species and aligned to the GREENGENES2 (v2022.10) and UNIT (v8.2) databases by references . Soil properties Soil water content was determined by soil sample weight stabilization after drying at 105°C. Soil pH was measured in a suspension of soil and deionized water at a ratio of 1:2.5. Soil organic matter content was assessed using the modified Walkley–Black procedure involving potassium dichromate oxidation. Dissolved organic carbon in the soil was analyzed by adding distilled water (at a ratio of 5:1) to 3 g of soil, followed by centrifugation after agitation (250 r/min, for 1 hour), and detection using a TOC analyzer. The total nitrogen content in the soil was determined using the semimicro Kjeldahl method (Kjeltec 2200 Auto Distillation Unit, FOSS, Hillerød, Sweden). Forest hydrolytic nitrogen (LY/T1229-1999) assay was employed to measure available nitrogen in the soil. Total and available phosphorus levels were measured via colorimetric analysis using an ultraviolet-visible spectrophotometer (UV-2550, Shimadzu, Kyoto, Japan). Calcium and magnesium concentrations were extracted from Mehlich-III solution and quantified using inductively coupled plasma emission spectrometry (Optima 2100 DV, Perkin-Elmer, Waltham, MA, USA). Statistical analyses Abundant and rare taxa of soil microbes were defined following the previous study . 
Briefly, ASVs with an average relative abundance of >0.05% across all samples were regarded as “abundant”; ASVs with an average relative abundance of <0.01% across all samples were regarded as “rare.” The rrn operon copy number, estimated using the rrnDB database, was employed to determine whether the bacterial community adopts K-strategies or r-strategies. Lower and higher rrn operon copy numbers indicate K-strategies and r-strategies, respectively . Fungal life strategies were classified at the phylum level and at the ecological guild level. The FUNGuild database was utilized to estimate the ecological guild of each fungal ASV , with ectomycorrhizal and saprotrophic fungi classified as K- and r-strategists, respectively . In addition, oligotrophic bacterial taxa (K-strategists) include Acidobacteria, Actinobacteria, Planctomycetes, and Chloroflexi . Eutrophic bacterial taxa (r-strategists) include Bacteroidetes, Gemmatimonadetes, and Firmicutes . Basidiomycota and Ascomycota were designated as oligotrophic (K-strategist) and eutrophic (r-strategist) fungal taxa, respectively . Alpha-diversity analyses of the bacterial and fungal communities were conducted in R. Principal coordinate analysis (PCoA) was used to visualize bacterial and fungal communities within the DSW and ZY karst tiankeng groups. When constructing networks, we retained only strong Spearman correlations (|r| > 0.7 for bacteria, |r| > 0.5 for fungi) , and visualization was conducted using Gephi software (0.10.1; Gephi, WebAtlas, France). Linear regression was employed to evaluate the effects of area and isolation on the microbial life strategies of each karst tiankeng. To improve model fit, the data were log-transformed. Structural equation modeling (SEM) was used to elucidate the causal pathways through which karst tiankeng island biogeographic factors and soil physicochemical properties influence the life strategies of soil microbes. Structural equation modeling was conducted via the lavaan package in R. 
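To illustrate the abundance-based classification and the rrn-based life-strategy index described above, the Python sketch below works through the logic on a toy abundance table (the ASV names, abundances, and rrn copy numbers are invented; in the actual analysis the copy numbers come from the rrnDB annotation and the computations were done in R).

import pandas as pd

# Toy ASV-by-sample table of relative abundances (proportions)
rel_abund = pd.DataFrame(
    {"S1": [0.40, 0.30, 0.29, 0.0002, 0.00005],
     "S2": [0.35, 0.25, 0.38, 0.0003, 0.00008]},
    index=["ASV1", "ASV2", "ASV3", "ASV4", "ASV5"])

# Abundant: mean relative abundance > 0.05% (0.0005); rare: < 0.01% (0.0001)
mean_abund = rel_abund.mean(axis=1)
category = pd.Series("intermediate", index=rel_abund.index)
category[mean_abund > 0.0005] = "abundant"
category[mean_abund < 0.0001] = "rare"

# Hypothetical rrn operon copy numbers per ASV (would be looked up in rrnDB)
rrn_copies = pd.Series({"ASV1": 6, "ASV2": 2, "ASV3": 4, "ASV4": 1, "ASV5": 7})

# Abundance-weighted mean rrn copy number per sample; higher values point toward r-strategists
weighted_rrn = rel_abund.mul(rrn_copies, axis=0).sum() / rel_abund.sum()
print(category)
print(weighted_rrn)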
Composition of soil bacterial and fungal communities in the karst tiankeng Regarding the bacterial communities, a total of 7,634,262 high-quality reads were clustered into 643,509 ASVs. There were 219 and 30,132 ASVs belonging to abundant and rare taxa, respectively. For the fungal communities, a total of 6,444,618 high-quality reads were clustered into 68,200 ASVs. Among these fungal ASVs, 341 and 13,011 ASVs were defined as abundant and rare taxa, respectively. The PCoA and Venn results revealed that bacterial and fungal community composition (total, abundant, and rare) in karst tiankengs exhibited differences between individual karst tiankengs and between the two karst tiankeng groups (Fig. S2 to S4). The shared bacterial ASVs of the total, abundant, and rare taxa in DSW and ZY occupied a higher proportion, whereas for fungi the ASVs of the total and rare taxa unique to each tiankeng outnumbered the shared ASVs. At the phylum level, Proteobacteria, Acidobacteriota, and Actinobacteriota were the most important components of the soil bacterial communities . 
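The PCoA step mentioned above can be reproduced with a few lines of code. The sketch below is a self-contained numpy/scipy illustration of classical PCoA on Bray-Curtis dissimilarities (the toy abundance matrix is invented; the published ordinations were produced in R).

import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy sample-by-ASV relative-abundance matrix (rows = samples)
X = np.array([[0.5, 0.3, 0.2, 0.0],
              [0.4, 0.4, 0.1, 0.1],
              [0.1, 0.2, 0.3, 0.4],
              [0.0, 0.1, 0.4, 0.5]])

# Bray-Curtis dissimilarities between samples
D = squareform(pdist(X, metric="braycurtis"))

# Classical PCoA: double-center the squared distance matrix, then eigendecompose
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
coords = eigvecs[:, :2] * np.sqrt(np.maximum(eigvals[:2], 0))  # first two PCoA axes
print(coords)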
Ascomycota, Basidiomycota, and Mortierellomycota were the most abundant fungal taxa across all soil samples. The effect of karst tiankeng area and isolation on the soil microbe life strategy Linear regression showed that the bacterial rrn operon copy number of the total, abundant, and rare taxa increased significantly with karst tiankeng area . The increase in rrn operon copy number indicated a shift in the bacterial community from K- to r-strategists. Soil fungi, however, showed markedly different life-strategy patterns. Karst tiankeng area did not affect fungal life strategy. The above relationships were robust for different taxa (total, abundant, and rare taxa) estimators. The bacterial rrn operon copy number exhibited positive relationships with karst tiankeng isolation (Fig. S5). Increasing karst tiankeng depth promoted the shift of the bacterial community from K- to r-strategists. Karst tiankeng isolation did not affect fungal life strategies. Coexistence patterns of bacterial and fungal communities in the karst tiankeng Co-occurrence networks of bacterial communities in the karst tiankeng soils consisted of 737 nodes and 2914 edges in DSW, which was higher than in ZY (nodes = 566, edges = 896, ). In the DSW and ZY bacterial networks, the proportions of rare nodes (30.26% and 30.74%) were higher than those of abundant nodes (17.49% and 18.18%). Co-occurrence networks of fungal communities consisted of 88 nodes and 259 edges in DSW and 60 nodes and 109 edges in ZY . Conversely, the proportions of abundant fungal nodes were higher than those of rare nodes. To determine the topological roles of karst tiankeng soil microbes, the Zi-Pi plot results are shown in Fig. S6. Twelve and six taxa were detected as “keystones” in the bacterial and fungal networks, respectively. These bacterial taxa belong to Proteobacteria, Acidobacteriota (K-strategists), Actinobacteriota (K-strategists), and Gemmatimonadota (r-strategists), and the fungal taxa belong to Ascomycota (r-strategists) and Mortierellomycota (r-strategists). All keystone taxa belonged to the abundant taxa in both the bacterial and fungal networks (Table S2). Linking bacterial and fungal communities to potential functions The potential functions of the abundant and rare bacteria showed that the abundant taxa were involved in higher activity in major metabolic functions , such as chemoheterotrophy, aerobic chemoheterotrophy, nitrogen fixation, aerobic ammonia oxidation, and nitrification. In comparison, rare taxa contained a greater diversity of potential functions (Fig. S7). The fungal potential function results showed that abundant fungal taxa were mainly assigned to endophyte, plant pathogen, and soil saprotroph guilds. The rare fungal taxa contained a greater diversity of potential functions, and a large proportion (>30%) were unassigned. Furthermore, we explored the relationship between the C, N, and P cycles and the abundant and rare bacterial taxa . Network analysis showed that abundant bacteria were highly associated with the C, N, and P cycles. In DSW, 33 functional genes were highly associated with 57 abundant bacterial taxa, while in ZY only 6 functional genes were highly associated with 8 abundant bacterial taxa. The nodes shared by the two networks belonged to the phyla Firmicutes, Actinobacteriota, and Acidobacteriota . The functional genes shared by the two networks belonged to the C cycle (fermentation to acetate and fermentation to ethanol), N cycle (assimilatory nitrate reduction), and P cycle (cytochrome aa3-600 menaquinol oxidase) (Table S3). 
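Returning to the area-life-strategy regressions reported at the start of this subsection, the minimal Python sketch below shows the underlying step: regressing the community-weighted rrn copy number against log-transformed tiankeng area (the per-tiankeng values here are hypothetical; the original analysis was run in R on log-transformed data).

import numpy as np
from scipy import stats

# Hypothetical per-tiankeng values
area_m2 = np.array([8_000, 15_000, 30_000, 60_000, 120_000, 250_000], dtype=float)
weighted_rrn = np.array([2.1, 2.3, 2.2, 2.6, 2.8, 3.1])  # community-weighted rrn copy number

# Log-transform area, then fit an ordinary least-squares line
slope, intercept, r, p, se = stats.linregress(np.log10(area_m2), weighted_rrn)
print(f"slope = {slope:.3f}, R^2 = {r**2:.3f}, p = {p:.4f}")
# A positive slope indicates a shift toward r-strategists (higher rrn copy numbers)
# with increasing tiankeng area, as reported for the bacterial communities.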
Responses of soil microbial community composition, function, and life strategies to environmental variables

Our study found differences in soil physicochemical properties between the different karst tiankengs (Table S4). The total phosphorus and soil water content of DSW and the dissolved organic carbon and soil water content of ZY increased significantly with karst tiankeng area (Table S5). Furthermore, we tested the effect of environmental factors on the soil microbial community. The Mantel test results showed that the responses of taxa and functions to environmental variables were more pronounced in DSW than in ZY . The responses of abundant and rare taxa to environmental variables were inconsistent for both the bacterial and fungal communities. For both bacterial and fungal communities, pH was notably associated with abundant taxa, rare taxa, and functional genes. Four environmental variables (AP, SOM, Ca, and Mg) were also associated with the rare taxa of the bacterial and fungal communities. More environmental variables were strongly related to the rare taxa than to the abundant taxa. Abundant bacterial life strategies were significantly associated with SWC in DSW and ZY (Table S6), while rare bacterial life strategies and abundant and rare fungal life strategies exhibited no obvious association with SWC. The SEM was applied to reveal the direct and indirect pathways that influence the life strategies of soil bacterial communities . The SEM results revealed that area did not have a direct effect on bacterial life strategies. In DSW, the greater rrn operon copy number of the total and abundant bacterial taxa in larger karst tiankengs was driven by the higher TP in those karst tiankengs . In ZY, the greater rrn operon copy number of the total bacterial taxa in larger karst tiankengs was driven by higher Ca content , while the greater rrn operon copy number of the abundant bacterial taxa in larger karst tiankengs was mainly driven by increased Ca and SWC; DOC did not affect the bacterial life strategies .
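The path analysis summarized above can be sketched with the lavaan package roughly as follows. The data frame name (sem_dat), the variable names, and the particular paths (area acting on the rrn copy number both directly and via TP, Ca, and SWC) are illustrative assumptions based on the description here, not the fitted model specification.

library(lavaan)
# Hypothetical data frame sem_dat with columns: log_area, TP, Ca, SWC, rrn_cwm
model <- '
  # soil properties respond to tiankeng area
  TP  ~ log_area
  Ca  ~ log_area
  SWC ~ log_area
  # bacterial life strategy (community-weighted rrn copy number) responds to area and soils
  rrn_cwm ~ log_area + TP + Ca + SWC
'
fit_sem <- sem(model, data = sem_dat)
summary(fit_sem, standardized = TRUE, fit.measures = TRUE)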
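For completeness, the correlation-threshold network construction described in the Methods can be sketched as below. It reuses the hypothetical asv_rel matrix, applies the bacterial |r| > 0.7 cutoff, and substitutes the igraph package for the Gephi visualization step; a full analysis would also filter correlations by adjusted p-values.

library(igraph)
cor_mat <- cor(t(asv_rel), method = "spearman")  # ASV-by-ASV Spearman correlations
adj <- 1 * (abs(cor_mat) > 0.7)                  # keep only strong correlations
diag(adj) <- 0                                   # no self-loops
g <- graph_from_adjacency_matrix(adj, mode = "undirected")
g <- delete_vertices(g, which(degree(g) == 0))   # drop unconnected ASVs
c(nodes = vcount(g), edges = ecount(g))          # network size, as reported per tiankeng group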
Community compositions and structure of abundant and rare taxa in the karst tiankeng

In our study, the bacterial communities in the karst tiankeng were dominated by Proteobacteria and Acidobacteriota . Proteobacteria encompass broad ecological and phylogenetic diversity and play a crucial role in energy metabolism . Acidobacteriota species are considered to contribute to organic matter decomposition and nutrient cycling . In addition, the abundant fungal phyla, including Ascomycota and Basidiomycota, may play a key role in soil aggregation and nutrient uptake . This result suggests that microbes involved in nutrient cycling and energy metabolism may survive well in karst tiankengs. However, no significant differences were observed between the two karst tiankeng groups. This may indicate that these soil microbes were well adapted to the habitat conditions of karst tiankengs. The bacterial and fungal community structures exhibited differences between individual karst tiankengs and between the two karst tiankeng groups (Fig. S2). Regarding the bacterial and fungal abundant taxa, there was no significant change among the different individual karst tiankengs, suggesting that bacterial and fungal abundant taxa were ubiquitous across all karst tiankeng soils . Abundant microbes occupy wide niches, exhibit ecological persistence, and effectively adapt to altered environments . However, the bacterial and fungal rare taxa were not evenly distributed in karst tiankeng soils. These endemic rare microbes mainly reflect the unique habitat of each karst tiankeng. Complex co-occurrence networks reveal microbial interactions within specific ecological niches . In both bacterial and fungal co-occurrence networks , the proportion of positive associations was dominant, indicating microbial mutualism as an adaptation to the habitat conditions of karst tiankengs. The abundant taxa occupy the central positions of the co-occurrence networks, which facilitates the growth of other species. The keystone species in the microbial co-occurrence networks mainly belong to Proteobacteria, Actinobacteriota, and Gemmatimonadota (bacteria) and Ascomycota and Mortierellomycota (fungi) (Table S2).
Diverse species occupy different ecological niches and are involved in different ecological processes . These findings indicate that abundant and rare taxa have distinct distributions, and keystone species may play an important role in regulating community function.

Bacterial rather than fungal life strategies are associated with karst tiankeng area and isolation

In this study, we found that the soil bacterial rrn operon copy number exhibited positive relationships with karst tiankeng area. Specifically, in small karst tiankengs, the bacterial rrn operon copy number is low . The increase in rrn operon copy number with area indicated a shift in the bacterial community from K- to r-strategists, i.e., the bacterial community favored the r-strategy in larger karst tiankengs. Furthermore, our study is consistent with previous studies showing that microbial life strategies are closely related to soil properties . Small karst tiankengs are characterized by lower vegetation coverage and intense edge effects, resulting in decreased soil water content and nutrients, and they appear to be more conducive to bacterial communities with relatively slow renewal rates (K-strategists). Mantel tests and Spearman correlation analysis also confirmed these results . The relatively wetter, labile-nutrient-rich environments of large karst tiankengs favor r-strategists with relatively fast renewal rates (Table S6). In the microbial networks, keystones belong to abundant taxa, such as Acidobacteriota (K-strategists), Gemmatimonadota (r-strategists), Ascomycota (r-strategists), and Mortierellomycota (r-strategists), and play a key role in the shift in microbial life strategy. Our findings suggest that soil properties are an important determinant of bacterial life strategies in karst tiankengs. The soil bacterial rrn operon copy number also exhibited positive relationships with karst tiankeng depth (isolation). More isolated karst tiankengs better maintain their internal microclimate, and the warming and humidification effect is more pronounced . Previous studies have shown that K-strategists are more inclined to live in arid soil ecosystems and tend to develop various functions to adapt to arid conditions . In our study, the more isolated karst tiankengs are characterized by higher soil moisture. In addition, previous studies have confirmed that human interference can lead to shifts in soil microbial life strategies . Because of the isolation imposed by vertical cliffs, more isolated karst tiankengs are also less subject to human interference. Unlike bacteria, the fungal life strategies were not affected by karst tiankeng area and isolation. Previous studies also indicated that soil properties influenced bacterial trait-based strategies but did not limit fungal life strategies . Together, our results suggest that the area and isolation of karst tiankengs affect the life strategies of bacteria, not fungi.

Functional attributes of abundant and rare taxa in the karst tiankeng

Microbial communities are core contributors to ecosystem function and play an important role in biogeochemical processes, including nutrient cycling and fixation . Annotation of the functions of abundant and rare taxa showed that the abundant taxa had higher activity in some major metabolic functions, whereas the rare taxa contained a greater diversity of metabolic functions .
Furthermore, the potential hosts of distinct functional genes were identified by network analysis, and the results showed that the most abundant taxa were the key potential hosts of functional genes for biogeochemical cycling (e.g., the C, N, and P cycles) in karst tiankengs . Our study reveals that abundant bacteria dominate the biogeochemical functions in karst tiankeng soils. Similarly, previous studies have found that abundant bacteria usually act as active contributors to biogeochemical cycles . The bacterial rrn operon copy number is often regarded as an indicator of nutrient utilization efficiency and survival strategy for individual organisms . The rare taxa had a higher rrn operon copy number than the abundant taxa, suggesting that abundant taxa had stronger nutrient utilization efficiency. Our network analysis also confirmed these results . Abundant taxa included fewer species, and their biogeochemical functions were diverse. In contrast, rare taxa possess relatively homogeneous biogeochemical functions with a higher number of species. These results indicate that the different life strategies of abundant and rare taxa result in differences in metabolic capacity and growth rate . Through network analysis, our results showed that K-strategists represented by Actinobacteriota and Acidobacteriota and r-strategists represented by Firmicutes were widely involved in soil C, N, and P cycling in karst tiankengs. Habitat loss is accompanied by a shift in the life strategies of soil bacteria. The abundant bacteria in small karst tiankengs behaved more like K-strategists and showed a preference for recalcitrant C substrates; they are also more easily adapted to environmental fluctuations . Small karst tiankengs are more susceptible to disturbance from outside the tiankeng environment (e.g., rocky desertification) and to human interference . Habitat loss leads to the reduction of habitat quality and promotes the shift of soil bacteria from r-strategists to K-strategists. K-strategists can achieve maximum growth efficiency in resource-constrained environments, but this higher metabolic capacity may lead to an increased rate of nutrient depletion, leading to ecosystem degradation .

Conclusions

In this study, we conducted amplicon sequencing to investigate the composition, structure, and life strategies of soil bacteria and fungi in karst tiankengs. Proteobacteria and Acidobacteriota (bacteria) and Ascomycota and Basidiomycota (fungi) were the most abundant phyla across all samples. The bacterial and fungal abundant taxa were ubiquitous in all karst tiankengs, while rare taxa exhibited a high proportion of species unique to each individual karst tiankeng because of the heterogeneous habitat. The increase of karst tiankeng area and isolation shifted the bacterial community towards the r-strategy, and among-tiankeng differences in soil properties generate the bacterial life strategy-area relationship. However, fungal life strategies did not exhibit a significant correlation with karst tiankeng area and isolation. Abundant taxa play a keystone role in the co-occurrence network and are closely related to nutrient cycling and the shift in bacterial life strategy. It is worth noting that some unmeasured factors may influence microbial life strategies, including historical factors and vegetation characteristics. Further studies need to systematically consider the multiple factors that influence microbial life strategies.
Overall, our study has implications for a better understanding of the characteristics of soil bacterial and fungal communities and the shift of life strategies in the karst tiankeng ecosystem.
Quitting on TikTok: Effects of Message Themes, Frames, and Sources on Engagement with Vaping Cessation Videos
Health campaigns have increasingly utilized social media in recent decades to reach youth and young adults . Social media engagement is broadly defined as any action where users interact, share, and create content within their networks . In health campaigns using social media, engagement has also become commonplace in campaign evaluations, serving as a proxy for message effectiveness .

Engagement as Part of Behavioral Change

The Integrated Behavioral Model posits that positive attitudes, perceived social norms, and personal agency regarding a behavior predict behavioral intentions, which subsequently influence actual behavior . People like social media posts for various reasons, such as socializing, giving feedback, sharing interests, and enjoyment; however, liking generally indicates a direct expression of positive sentiment . Furthermore, individuals tend to share social media content that aligns with their beliefs . Therefore, liking and sharing a post on social media may signal audience interest and positive attitudes toward the content, potentially serving as a "priming step" to behavior change . Based on the Integrated Behavioral Model, positive comments about promoted health behaviors suggest a favorable attitude toward adopting the behavior, whereas negative comments may reflect reluctance to embrace the recommended behavior.

Engagement as Persuasive Cues

The bandwagon effect occurs when people conform to the behavior and attitudes of others because they believe such behavior and attitudes are popular, desirable, or socially acceptable . In the context of social media communication, bandwagon cues, such as a large number of likes, shares, and positive comments, can trigger the bandwagon effect by signaling popularity and social acceptance . For example, found that news headlines on Facebook with many likes were rated more credible than news with fewer likes. Health campaigns that received a greater number of positive comments were evaluated more favorably than campaigns associated with more negative comments and fewer positive comments . Moreover, high shares increased perceptions of message influence and preventive health behavioral intentions . Therefore, engagement with social media health campaigns not only reflects how audiences respond to a post but also influences how the post is processed. This study focuses on metrics including positive engagement (i.e., likes, shares, positive comments about quitting vaping) and negative engagement (i.e., negative comments about quitting vaping) to identify effective features for future vaping cessation social media campaigns. Specifically, we focus on examining the effects of message source and content features, including message themes and frames, on audience engagement with vaping cessation TikTok videos.
Previous research has identified the following common themes in vaping-related health messages: 1) physical health outcomes , 2) mental health outcomes , 3) harmful chemicals in vape products , 4) nicotine addiction , 5) the negative social image associated with vaping , and 6) financial costs of vaping . Themes addressing nicotine addiction, harmful chemicals, and negative health outcomes led to higher perceived message effectiveness among youth . found that themes related to physical health outcomes were perceived as the most effective, surpassing themes on chemicals in vapes, mental health outcomes, and nicotine addiction. Additionally, nicotine addiction themes were less effective in eliciting negative affect compared to physical health effects and chemicals in vapes . Notably, these theme-based studies pertained to vaping prevention rather than vaping cessation. The current study explores which message themes receive more engagement with TikTok vaping cessation videos. The following research question was proposed: RQ1: What are the associations between the six pre-identified themes and both positive and negative engagement with vaping cessation TikTok videos? Health messages can be framed to emphasize either the benefits of a behavior (gain frame) or the consequences of not engaging in it (loss frame) . Studies suggest that loss-framed messages are more persuasive for detection behaviors like cancer screening, while gain-framed messages are more effective for promoting prevention behaviors such as exercise or quitting tobacco products . Research on gain and loss frames in the context of vaping prevention has yielded mixed results . However, no studies have specifically examined the effects of gain and loss frames on promoting vaping cessation .
Despite the distinctions between cigarette cessation and vaping cessation concerning the products involved, a previous meta-analysis suggests that gain-framed messages were more likely than loss-framed messages to encourage smoking cessation .

Ratio of Gain and Loss Frames

Previous experimental studies have predominantly focused on comparing pure gain-framed and loss-framed messages . However, in real-life scenarios, the incorporation of both gain and loss frames in health messages, particularly within the context of TikTok videos, is common. The Emotions-as-Frames model (EFM ) argues that loss-framed messages, emphasizing the negative consequences of not adopting recommended behaviors, tend to evoke negative emotions such as fear and guilt . Conversely, gain-framed messages are more likely to elicit positive emotions such as hope . Furthermore, EFM suggests that positive emotions enhance the persuasive impact of gain framing, while negative emotions strengthen the influence of loss framing . Increasing the ratio of gain to loss frames in a message could therefore intensify the corresponding emotional responses. Given the documented advantage of gain frames in the smoking cessation literature, we posit the following hypotheses: H1: Vaping cessation TikTok videos with a higher ratio of gain frames elicit more positive social media engagement and less negative engagement than videos with a lower ratio of gain frames. H2: Vaping cessation TikTok videos with a higher ratio of loss frames elicit less positive social media engagement and more negative engagement than videos with a lower ratio of loss frames. A message source is the individual, group, or organization that the audience perceives as the communication originator . The characteristics of a message source can contribute to attitudinal and behavioral change through two psychological processes: internalization and identification . The internalization process can be manifested in the expertise of message sources; formal experts, like healthcare professionals, can increase vaping risk perceptions among young adults .
In addition, recent research has acknowledged the persuasive effects of informal experts, who are individuals with firsthand experience (i.e., experiential expertise) of specific health issues . In vaping cessation, individuals who have successfully quit possess informal expertise, drawing on their firsthand experiences and knowledge of the quitting process. Identification is enhanced by source homophily, where similarities in beliefs, values, and social status between sender and recipient strengthen message impact . Although the literature on youth preferences for vaping cessation sources is limited, research shows youth smokers prefer messages from peers who smoke . When the recipient of a message perceives themselves to be relatable to the sender, the persuasive impact of the message tends to be stronger . Thus, current e-cigarette users might be effective message sources for vaping cessation campaigns. Given the inability to determine the vaping and quitting status of TikTok video viewers and the lack of research on different message sources in vaping cessation videos, we developed the following research questions to evaluate the influence of various message sources on engagement. RQ2: Do videos featuring formal experts, informal experts, and current e-cigarette users receive greater positive engagement and less negative engagement than videos that do not incorporate these message sources? RQ3: Which of the message sources (formal experts, informal experts, current e-cigarette users) generate the highest positive engagement and the least negative engagement in vaping cessation TikTok videos?

Study Design and Data Collection

Using an open-source TikTok scraping tool , we collected all publicly available TikTok videos containing the hashtags #quitvaping and/or #quitvape posted between January 1st, 2022, and December 31st, 2022. In total, we collected 1,709 public TikTok videos, including associated metadata such as the number of video diggs (i.e., likes), comments, and follower counts. The comments associated with the 1,709 TikTok videos were collected, resulting in a total of 47,879 comments. We randomly sampled 50% of the 1,709 videos ( N = 855) for the content analysis. The Institutional Review Board at a major university in the northeastern US exempted this study from review because it involved non-human subjects and used publicly available data.

Sampling and Inclusion Criteria

We first coded whether the video was in English. Videos that were not in English were excluded from further analysis. Next, we determined the relevance of each video to vaping cessation. Only videos that explicitly mentioned quitting e-cigarettes were considered relevant to our study. For instance, videos that offered advice on quitting, shared personal experiences of quitting, or discussed the benefits of quitting were deemed relevant to quitting vaping. displays the sampling procedure used in this study.

Intercoder Reliability

To attain high coding reliability, two coders were first trained on 50 videos that were not included in the sampled video dataset. Discrepancies were discussed to resolve coding disagreements in three separate meetings. Next, the two coders independently coded 10% of the sample data ( N = 86) for inter-coder reliability. Coding agreements were assessed with Cohen's Kappa values, which were above 0.7 across all content variables, indicating a high level of intercoder reliability . The two trained coders then independently coded the rest of the videos. displays the inter-coder reliability.
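A minimal R sketch of the agreement check is given below. It assumes a hypothetical data frame holding the two coders' binary codes for one content variable across the double-coded subsample and uses the irr package's kappa2() as one standard implementation of Cohen's Kappa.

library(irr)
# Hypothetical reliability data: each row is one double-coded video,
# columns are the two coders' binary codes (1 = feature present, 0 = absent)
reliability_set <- data.frame(coder1 = c(1, 0, 1, 1, 0, 1),
                              coder2 = c(1, 0, 1, 0, 0, 1))
kappa2(reliability_set)  # Cohen's Kappa; values above 0.7 were treated as acceptable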
Video Coding Features – Predictor Variables

The coding of message frames is contingent on message themes, as a frame can only be properly understood within the context of a specific theme. Therefore, we coded the presence/absence of six gain- and/or loss-framed themes related to vaping drawn from previous studies: 1) physical health outcomes; 2) mental health outcomes; 3) harmful chemicals in vape products; 4) nicotine addiction; 5) negative social image associated with vaping; and 6) financial costs of vaping. A video could contain both gain- and loss-framed messages across the six specific themes. Thus, a total of 12 gain/loss-framed themes were coded for each video.

Presence of Six Message Themes

The presence of each of the six themes was determined based on the inclusion of gain- or loss-framed messages related to the coded theme.

Ratio of Gain Frames

We calculated the ratio of gain frames by dividing the number of gain-framed themes by the total number of present gain/loss-framed themes.

Ratio of Loss Frames

Similarly, we calculated the ratio of loss frames by dividing the number of loss-framed themes by the total number of present gain- or loss-framed themes.

Message Source

A message source was categorized as a formal expert source (i.e., healthcare professionals) if the main character in a video introduced themselves as a healthcare professional or wore medical professional attire (e.g., white coats, scrub tops). In addition, a message source was determined to be an informal expert (i.e., individuals who have successfully quit vaping) if the main character in the video indicated they had successfully quit vaping. Lastly, a message source was classified as a current user source if the main character disclosed current e-cigarette use. Videos that did not contain any of the above three message sources were categorized as having non-expert and non-user sources.

Video Engagement – Outcome Variables

Numbers of Likes and Shares

The number of likes and shares a video received was obtained during the scraping of the videos.

Positive and Negative Comments About Quitting Vaping

To evaluate the sentiment of comments about quitting vaping, we conducted aspect-based sentiment analysis (ABSA) on all videos with at least one comment. In ABSA, "aspects" are attributes or components discussed in the text. We analyzed 47,879 comments using ABSApp , identifying 152 initial aspects. ABSApp provided examples of text strings for each aspect, which guided us in manually selecting six relevant terms for quitting vaping: quit, journey, choice, quitting, decisions, and decision. We excluded irrelevant aspects such as years, anyone, dude, dreams, and kids. We calculated aspect-based sentiment for each comment using an off-the-shelf LSA-T-DeBERTa model. LSA-T-DeBERTa demonstrates state-of-the-art performance across various natural language processing tasks by effectively capturing contextual information and semantic relationships within the text. The model achieves a macro-average performance score of 85% on multiple public datasets . The model provided probabilities for negative, neutral, and positive sentiments. For instance, "Nicotine has nothing to do with our anxiety, I quit back in February and I'm just as anxious and depressed as I was before" was categorized as negative toward quitting vaping, "How did you quit?" as neutral, and "I want to quit so badly, not sure why I keep putting it off" as positive. Comments were assigned to the sentiment category with the highest probability.
We then summed the number of positive and negative comments about quitting vaping for each video with at least one relevant comment. We validated the model's predictions of aspect sentiment regarding quitting vaping by manually coding 15% of the examined comments. The validation metrics demonstrate good performance, with an accuracy of 81.08%. Details of the validation process and results are provided in the .

Statistical Analyses

Mixed-effect negative binomial models were utilized to test the hypotheses and research questions, with each engagement metric (likes, positive and negative comments regarding quitting vaping, shares) treated as a separate outcome variable. The models included the following predictors: 1) the presence or absence of each of the six message themes, 2) a four-level categorical variable indicating the type of message source, and 3) a continuous variable representing the ratio of gain/loss frames in the video. To avoid multicollinearity, the ratio of gain frames and the ratio of loss frames were entered separately as predictors, along with the two other predictor variables, in each of the negative binomial models. The analyses were conducted using R (Version 1.4.1106) and the R package glmmADMB. All models included random effects of TikTok users and were adjusted for variables that could affect video engagement, including TikTok account follower counts (per thousand), video length (in seconds), and the total number of gain- and loss-framed themes in the video. Videos featuring at least one of the six identified themes were included in the negative binomial analysis of likes and shares. Additionally, videos that mentioned at least one theme and received at least one comment were analyzed for positive and negative comments about quitting vaping.
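To make the model specification concrete, a minimal sketch of one such model is shown below. The per-video data frame (videos) and its variable names are hypothetical, so this illustrates the modeling approach rather than the exact analysis; glmmTMB would be a near drop-in alternative to glmmADMB.

library(glmmADMB)
# Hypothetical per-video data frame 'videos': engagement counts, theme indicators,
# source type, gain-frame ratio, follower count (thousands), video length, and poster id
m_likes <- glmmadmb(
  likes ~ theme_physical + theme_mental + theme_chemical + theme_addiction +
          theme_social + theme_financial + source_type + gain_ratio +
          followers_k + length_sec + n_framed_themes + (1 | user_id),
  data   = videos,
  family = "nbinom")   # negative binomial with a random intercept per TikTok user
summary(m_likes)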
Descriptive Analysis Results

The 412 videos received over 83 million views on TikTok, with an average of 203,201 views per video (SD = 677,793). Videos received a mean of 248 comments (SD = 924, Mdn = 28, IQR = 89), 21,185 likes (SD = 72,775, Mdn = 1,408, IQR = 5,119), and 368 shares (SD = 1,541, Mdn = 11, IQR = 76).
Descriptive Analysis Results

The 412 videos received over 83 million views on TikTok, with an average of 203,201 views per video (SD = 677,793). Videos received a mean of 248 comments (SD = 924, Mdn = 28, IQR = 89), 21,185 likes (SD = 72,775, Mdn = 1,408, IQR = 5,119), and 368 shares (SD = 1,541, Mdn = 11, IQR = 76). The mean number of positive comments about quitting was 3 (SD = 7, Mdn = 1, IQR = 3), and the mean number of negative comments about quitting was 3 (SD = 7, Mdn = 1, IQR = 4).

Message Themes and Frames

presents the presence of twelve gain- and loss-framed themes in English-language vaping cessation videos ( N = 412). The most common theme was nicotine addiction, followed by physical health, mental health, harmful chemicals in vapes, financial impacts of vaping, and negative social perceptions of vaping. Exploratory inductive coding of the 135 videos without these six themes revealed that 56 (41%) featured individuals discussing their decision to quit vaping (see ). provides examples of gain- and loss-framed messages for each theme. Among the 277 videos containing at least one of the identified themes, the average ratio of gain frames was 0.29 (SD = 0.37), while the ratio of loss frames was 0.71 (SD = 0.37).

Message Sources

Among the coded videos, 10 (2.4%) videos featured formal experts. Additional string-matching analyses using keywords like “doctor” and “MD” did not find additional formal expert videos . Furthermore, 54 (13.1%) videos showed informal experts, who indicated that they have successfully quit vaping, while 241 (58.5%) videos portrayed current e-cigarette user sources. Lastly, 107 (26.0%) videos included non-expert and non-user sources.

Predicting Video Engagement with Message Themes, Frames, and Sources

displays the results of the mixed-effect negative binomial regression models.

Effects of Six Message Themes on Video Engagement

RQ1 examined the effects of six distinct message themes on video engagement. Negative binomial regression results revealed that the presence of the chemical theme was associated with both more negative (IRR = 2.74, p = .02, 95% CI = 1.15, 6.52) and positive comments (IRR = 2.15, p = .05, 95% CI = 1.01, 4.56) about quitting vaping. Additionally, the physical health theme was linked to more likes (IRR = 3.30, p = .01, 95% CI = 1.39, 7.86) and shares (IRR = 5.11, p = .003, 95% CI = 1.74, 15.05), while the addiction theme received more likes (IRR = 2.76, p = .05, 95% CI = 1.01, 7.50).

Effects of Gain and Loss Frames on Video Engagement

H1 proposed that a higher ratio of gain frames to the total number of gain and loss frames in a video would predict increased positive engagement and reduced negative engagement. The results suggest that videos with a higher ratio of gain frames elicited more likes (IRR = 2.79, p = .01, 95% CI = 1.23, 6.30), more positive comments about quitting vaping (IRR = 1.86, p = .04, 95% CI = 1.04, 3.33), and more shares (IRR = 3.51, p = .01, 95% CI = 1.35, 9.12). However, no significant association was found between negative comments and the ratio of gain frames (IRR = 0.32, p = 1.40, 95% CI = 0.72, 2.72). Therefore, H1 was partially supported. H2 proposed that a higher ratio of loss frames in a video would predict decreased positive engagement and increased negative engagement. The results suggest that videos with a higher ratio of loss frames elicited fewer likes (IRR = 0.36, p = .01, 95% CI = 0.16, 0.81), fewer positive comments about quitting vaping (IRR = 0.54, p = .04, 95% CI = 0.30, 0.96), and fewer shares (IRR = 0.28, p = .01, 95% CI = 0.11, 0.74). Additionally, no significant association was found between negative comments and the ratio of loss frames (IRR = 0.71, p = .32, 95% CI = 0.37, 1.38). Therefore, H2 was partially supported.
Effects of Message Sources on Video Engagement

RQ2 investigated whether TikTok vaping cessation videos featuring formal experts (i.e., healthcare professionals), informal experts (i.e., individuals who have successfully quit vaping), and current user sources (i.e., individuals who currently use e-cigarettes) generate more positive engagement and less negative engagement compared to videos featuring non-expert and non-user sources. Findings from the negative binomial regressions showed that non-expert and non-user sources received fewer likes (IRR = 0.45, p = .04, 95% CI = 0.21, 0.97) than current user sources. In addition, non-expert and non-user videos were associated with more negative comments about quitting vaping than videos featuring informal experts who had successfully quit vaping (IRR = 2.61, p = .03, 95% CI = 1.12, 6.07). RQ3 asked which of the three message sources (formal experts, informal experts, current user sources) generate the highest engagement compared to one another. The results indicated that informal expert sources received both fewer positive comments (IRR = 0.40, p = .005, 95% CI = 0.21, 0.76) and fewer negative comments (IRR = 0.31, p = .002, 95% CI = 0.15, 0.64) about quitting vaping than current user sources. No other significant differences were observed in video engagement when comparing the three types of message sources.
This study investigated how message themes, frames, and sources impact engagement with user-generated vaping cessation videos on TikTok. The primary themes in TikTok videos were physical health outcomes and nicotine addiction. On average, the videos featured a higher ratio of loss-framed messages than gain-framed messages. Additionally, over half of the videos featured individuals who disclosed current e-cigarette use, followed by non-expert non-user sources, informal experts who successfully quit, and formal experts such as doctors.

Engagement with Vaping Cessation TikTok Videos

Themes and Video Engagement

Nicotine addiction emerged as the most prevalent theme, correlating with higher positive engagement (likes). Physical health, the second most common theme, also showed a positive correlation with positive engagement (likes and shares). Given that likes often indicate positive audience sentiment , the increased correlation between likes and both nicotine addiction and physical health themes suggests potential effectiveness in future social media vaping cessation campaigns. Sharing health-related information on social media can be driven by a desire to spread knowledge and show care for others . Our findings suggest that people might regard physical health as significant enough to share within their networks. Future vaping cessation campaigns aiming to increase awareness and engagement with the issue of vaping cessation could emphasize the physical health effects of vaping. Incorporating the theme of harmful chemicals in vaping products led to more positive comments about quitting, consistent with previous research on its effectiveness in prevention messages . However, the theme of harmful chemicals also generated more negative comments about quitting. Previous research found that cigarette pack messages about toxic chemicals did not increase intentions to quit smoking but increased awareness of chemicals and health harms . Further research is needed to understand the effects of the chemical theme in vaping cessation and the moderators that might affect the message effect.

Frames and Engagement

Aligning with the detection/prevention behavioral classification in gain- and loss-framing effects , our study found that a higher ratio of gain frames in vaping cessation videos was associated with increased likes, shares, and positive comments about quitting vaping. The benefits of incorporating gain frames may be explained by the heuristic processing of social media posts . Individuals who rely on heuristic processing prefer positive information while avoiding negative information, consistent with the hedonic principle . As the effectiveness of gain frames in persuasion depends on the intensity of positive emotions evoked , future TikTok vaping cessation campaigns may benefit from incorporating more gain-framed messages to maximize engagement . However, our results indicate that gain frames were not associated with reduced negative comments about quitting vaping compared to loss frames. Future research should explore why negative comments arise in response to social media health campaigns, considering factors like message reactance and personal agency , to decrease negative engagement among audiences.
Sources and Engagement

When examining the effects of different message sources on video engagement, our study revealed an advantage in utilizing potentially relatable message sources who currently vape and informal expert sources. Vaping cessation videos featuring current users garnered more likes than those from non-expert, non-user sources. Additionally, videos featuring successful quitters received more positive comments compared to those featuring current users. Prior research has shown that “current teenaged smoker” and “successful teenaged quitter” were the top two preferred message sources for smoking cessation videos among youth . Our study suggests that both current user and informal expert sources may effectively influence the audience’s attitudes toward quitting vaping. Contrary to the hypothesis based on the internalization process of persuasion , our study found that formal expert sources such as doctors were not associated with more positive engagement. One possible explanation for this unexpected result could be the relatively small sample size of videos featuring formal expert sources ( N = 10). Further research is needed to evaluate the effectiveness of including formal experts, like healthcare professionals, in vaping cessation TikTok videos.

Implications and Limitations of Using Engagement as Proxy Measures of Campaign Effectiveness

Drawing on the Integrated Behavioral Model and the bandwagon effect , engagement metrics such as likes, shares, and comments may reflect audience perceptions of recommended behaviors, potentially precede behavioral change, and serve as persuasive cues in social media campaigns. However, liking a brand on social media, for example, does not always result in purchasing the product . Therefore, while high engagement with health campaigns might signal positive sentiment, researchers have cautioned that such engagement does not always lead to meaningful attitude shifts or sustained behavior change . Moreover, engagement can also be influenced by factors unrelated to persuasion, such as entertainment value or peer influence . Research gaps include the aggregation of engagement types into a single score and a lack of focus on negative engagement, such as negative comments . Our study contributes to the literature by examining different engagement types and distinguishing positive and negative comments toward recommended health behaviors. However, a clearer theoretical understanding of the reasons for and outcomes of engagement with social media health campaigns is still needed . Longitudinal and observational studies that link social media engagement to real-life health attitudes and behaviors could provide deeper insights.

Our study has limitations. Given our specific focus on TikTok vaping cessation videos, the findings may not apply to other social media platforms. Due to the content analysis nature of the study, we lacked data on audience vaping status and age, preventing the examination of causal links between video exposure and quitting behaviors. Additionally, we were unable to study specific persuasive outcomes, nor did we analyze audience emotional responses to the videos. Moreover, it is essential to recognize that video engagement does not guarantee video persuasiveness.

Our study suggests that future TikTok vaping cessation campaigns could benefit from incorporating themes related to physical health, addiction, harmful chemicals, and gain-framed messages.
Additionally, utilizing message sources such as current e-cigarette users and individuals who have successfully quit vaping might enhance campaign engagement. The effectiveness of featuring formal experts, such as healthcare professionals, in vaping cessation TikTok videos warrants further research.

Supplementary Material
Evaluation of metabolite stability in dried blood spot stored at different temperatures and times
6b072a42-aebf-48d6-9138-f46cb1546e7a
11680943
Biochemistry[mh]
Metabolomics, the comprehensive study of metabolites within biological systems, is pivotal for elucidating biochemical processes, understanding disease mechanisms, and evaluating drug responses . This analytical approach offers valuable insights into the overall phenotype of biological systems. Blood stands out as one of the most frequently utilized biological fluids for medical diagnostics and research due to its accessibility, complex composition, and ability to reflect systemic physiological states . However, blood sample collection presents various challenges, such as the high costs of sampling procedures, the need for specialized handling post-collection, and logistical difficulties in transportation and storage, particularly in remote regions. Consequently, as personalized precision medicine and population health research frameworks evolve, convenient sampling methods that enhance patient engagement and expand bioanalytical capabilities in healthcare are needed. Dried blood spot (DBS) sampling, first introduced in 1963 , has primarily been utilized for neonatal disease screening and health assessment . Beyond its traditional role in newborn screening, DBS has found applications in diverse fields, including environmental contaminant tracking, drug monitoring, genomics, and proteomics – . Recently, DBS has garnered increased attention for its utility in metabolomics, particularly for biomarker discovery and disease diagnostics – . DBS presents an enticing alternative sampling approach for clinical applications, thanks to its notable benefits including reduced sample volume, minimal invasiveness, the feasibility of home-based sampling, and easy transportability . Despite the significant advantages of DBS techniques in various fields, preserving metabolite stability during transportation and storage remains a major challenge. It is widely recognized that storing plasma or serum samples at -80 °C results in minimal alteration of metabolites . Research has shown that storing DBS at either -20 °C or -80 °C effectively preserves most metabolites for at least 2 years . Furthermore, research has indicated that storing DBS at -20 °C for one year is advantageous for preserving metabolite stability . Moreover, it has been demonstrated that certain metabolites in DBS exhibit varying degrees of stability after 168 days of storage under different temperature conditions. Notably, lipid metabolites are less stable at higher temperatures, while amino acid metabolites show relatively moderate bidirectional concentration changes . In contrast, most amino acids in DBS showed significant degradation after one year of storage at 4 °C followed by four years at room temperature . Notably, the majority (71%) of metabolites remained stable during 10 years of storage at -20 °C, although lipid metabolites exhibited a decreasing trend . However, the intervals and delivery conditions from sample collection to metabolomics analysis often do not match these ideal conditions. While controlled studies on metabolite stability in DBS have primarily focused on elevated temperatures such as room temperature , our understanding of the detailed stability of specific metabolites remains limited. The storage conditions of DBS samples may adversely affect metabolite profiles, particularly representative biomarker candidates, potentially compromising the reliability of DBS samples for clinical diagnostic applications.
To improve the practicality of DBS sampling in clinical metabolomics, it is essential to identify which representative metabolites remain stable or unstable under various storage conditions, ensuring the scientific validity and accuracy of clinical studies. In this study, we investigated the stability of metabolites in DBS samples stored under three different environmental conditions (4 °C, 25 °C, and 40 °C) at various time points (3, 7, 14, and 21 days) using multi-platform untargeted metabolomics. We assessed the stability of identified metabolites in DBS samples by comparing the peak intensities at each time point and temperature against those of the controls (0 day). Our analysis yielded a comprehensive understanding of specific metabolites in DBS samples stored at different temperatures and durations, providing valuable insights for study design, standardizing biomarker selection, and improving data quality.

Metabolite detection and identification in DBS samples

Metabolites are highly dynamic molecules that can degrade over time, especially when exposed to environmental factors such as diverse temperatures , , . After home-based sampling, however, it is essential to recognize that during the transportation process from sample collection to mass spectrometry detection, DBS may encounter less than optimal conditions. Therefore, a comprehensive understanding of metabolite stability from sample collection to analysis under adverse environmental conditions, such as 25 °C (mimicking room temperature) and 40 °C, is necessary. Furthermore, Filippos et al. found that DBS stored at room temperature exhibited poor stability over time, contrasting with the relatively stable compositions observed at lower temperatures . In addition, Trifonova et al. found that clinically relevant compounds such as creatine, glucose, carnitine, and glutamine exhibited alterations of less than 15% (relative standard deviation (RSD) < 15%) during four weeks of storage at room temperature . However, these studies only provided annotations for a subset of metabolites, lacking extensive discussion of the stability of the large number of identified metabolites. To broaden the range of detectable metabolites, we employed both positive- and negative-mode UHPLC-MS and GC-MS-based multi-platform untargeted metabolomics, in conjunction with an extraction method that separates hydrophilic and hydrophobic compounds . This comprehensive approach covered metabolites of all polarities. Ultimately, we detected 1106 metabolic features and assigned 353 metabolites classified into subclasses, such as amino acids, carbohydrates, nucleotides, organic acids, peptides, ceramides, fatty acids, lysophosphatidylcholines (LysoPCs), lysophosphatidylethanolamines (LysoPEs), monoether phosphatidylcholines (MePCs), phosphatidylcholines (PCs), phosphatidylethanolamines (PEs), sphingomyelins (SMs), triglycerides (TAGs), and others (Table and S1).
The data were analyzed by unsupervised principal component analysis (PCA), which showed the overall effect of storage temperatures. The scores plot generated from the PCA model distinctly illustrated the differentiation of DBS samples based on storage temperatures (Fig. A). In addition, the score plot derived from partial least squares discriminant analysis (PLS-DA) also exhibited a clear separation among samples stored at different temperatures (Figure A). Furthermore, we explored the impact of storage duration on metabolite composition in DBS samples stored at different temperatures. We also performed a PCA to get a first overview of the data set (Fig. B–D). The PCA scores plot revealed that the samples from the same days formed distinct clusters, and the five clusters could be separated by the first principal component (PC1) at 4 °C, 25 °C, and 40 °C, respectively. As illustrated by the loading plots, the main factors driving the separation of the five clusters along PC1 were PCs and TAGs (Figure B–D). Indeed, PCs with carbon chain lengths of 34, 36, and 38, as well as TAGs with carbon chain lengths of 50 and 52, demonstrated markedly decreased metabolite intensities according to the storage times (Figure ). During the same time period, notably increased metabolite intensities were observed for LysoPCs at 25 °C and 40 °C, which comprised the predominant species in the second principal component (PC2). These results suggest that the metabolic profiles can be distinguished based on storage temperatures, and significant time-dependent changes occurred in metabolite intensities.

Based on previously published studies, metabolites are deemed stable if the relative standard deviation (RSD) of each metabolite remains below 15% or 20% during storage , – . To distinguish stable and unstable metabolites, we assessed changes in metabolite intensities relative to the reference (0 day), considering an RSD greater than 15% as indicative of instability (Table ). We determined the counts of metabolites identified as unstable in DBS samples following storage for 3, 7, 14, and 21 days at three different storage temperatures (4 °C, 25 °C, and 40 °C).
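As a rough illustration of this 15% RSD criterion, the following Python sketch computes per-metabolite RSD values from a long-format intensity table. The column names and the exact way replicate intensities are pooled with the day-0 controls are assumptions for illustration, not details confirmed by the text.

```python
# Minimal sketch of the RSD-based stability flag: for each metabolite, temperature,
# and storage day, pool the replicate intensities with the day-0 controls and mark
# the metabolite as unstable when RSD > 15%.
import pandas as pd

data = pd.read_csv("dbs_intensities.csv")  # hypothetical columns: metabolite, temp_c, day, replicate, intensity

day0 = data[data["day"] == 0][["metabolite", "intensity"]]

records = []
for (met, temp, day), grp in data[data["day"] > 0].groupby(["metabolite", "temp_c", "day"]):
    pooled = pd.concat([grp["intensity"], day0.loc[day0["metabolite"] == met, "intensity"]])
    rsd = 100 * pooled.std(ddof=1) / pooled.mean()
    records.append({"metabolite": met, "temp_c": temp, "day": day,
                    "rsd_percent": rsd, "unstable": rsd > 15})

stability = pd.DataFrame(records)
# Count unstable metabolites per temperature and day, as summarized in the text.
print(stability.groupby(["temp_c", "day"])["unstable"].sum())
```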
The majority of metabolites remained stable, except for organic acids, PCs, TAGs, and PEs (Table ). We observed that the alterations in these metabolites at each time point exceeded 4.2% regardless of temperature, with TAGs exhibiting the most significant variation, exceeding 5.7%. At both 25 °C and 40 °C, we observed significant alterations in LysoPCs and carbohydrates, exceeding 4.5% over 7 days and 14 days, respectively. Studies have shown that PCs, TAGs, PEs, and LysoPCs often contain ester bonds and/or unsaturated bonds, making them susceptible to degradation under conditions such as exposure to moisture, elevated temperatures, or suboptimal storage environments , . Furthermore, amino acids showed instability when stored at 40 °C for over 14 days. Studies have indicated that the instability of amino acids at elevated temperatures is primarily attributed to their propensity for complex chemical transformations, including pyrolysis, dehydration, oxidation, and polymerization. These reactions may alter the molecular structure of amino acids, significantly affecting their chemical properties and biological functionality . Conversely, for other subclasses such as nucleotides, peptides, ceramides, fatty acids, LysoPEs, MePCs, and SMs, regardless of temperature, the number of changes was less than 4%, indicating relative stability. Additionally, from 3 days to 21 days, there was no significant increase in this quantity, which remained below 4%, suggesting that the number of unstable metabolites did not markedly change after 3 days. During 21 days of long-term storage, we found that the intensities of 188, 130, and 81 metabolites changed by less than 15% (RSD < 15%) at 4 °C, 25 °C, and 40 °C, respectively (Fig. A). Among them, sixty-nine out of 353 metabolites remained stable even when stored at all three temperatures (Fig. B). These included 15 (21.7%) lipids, 9 (13.0%) amino acids, 8 (11.6%) carbohydrates, 10 (14.5%) nucleotides, and 16 (23.2%) organic acids. These metabolites can be effectively utilized as biomarker candidates in biological functions. Furthermore, the remaining 61 metabolites exhibited stability at 25 °C and can serve as dependable metabolic molecules for disease risk assessment or prognosis evaluation (Figure S3A). If we consider that the typical shipping time is less than 3 days, applying a 3-day stability criterion would leave approximately 186 metabolites stable at 25 °C (see Table ). In contrast, we identified 85, 149, and 169 metabolites whose intensities changed by more than 15% regardless of the storage temperatures (Fig. C). Among them, 78 metabolites exhibited instability at all three temperatures, comprising 69 (88.5%) lipids, 3 (3.8%) carbohydrates, and 5 (6.4%) organic acids (Fig. D). These results suggest that lipids are relatively unstable, and the aforementioned 78 metabolites should be excluded from consideration as biomarker candidates. Additionally, 71 metabolites showed instability at 25 °C and should also be avoided for biomarker purposes (Figure S3B). We also identified four clinically relevant compounds and obtained similar results . For instance, the degradation of creatine began after 7 days in our dataset, consistent with their results. Additionally, the degradation of glucose and carnitine began after 7 days, and their levels steadily declined until the end of the 3-week period.
Interestingly, previous studies have identified glutamine as the most stability-sensitive of the 23 amino acids, with significant degradation observed after three weeks of storage at room temperature . In contrast, our findings reveal that glutamine stability was compromised as early as seven days under similar conditions, likely attributable to the activity of glutaminase . We observed that certain cancer-associated metabolites, such as xanthine and hypoxanthine , exhibited stability across diverse temperatures and storage periods, indicating their potential as promising biomarker candidates for cancer diagnosis. An even higher rate of degradation has been observed for carnitine and acetyl-carnitine across all storage temperatures , which is consistent with our findings. Similarly, our results indicate that LysoPCs exhibited a tendency toward degradation, aligning with earlier reports . From the stable and unstable metabolites, comprising 69 and 78 metabolites respectively, we selected those with HMDB IDs and conducted pathway enrichment analysis separately (Fig. E). We found that pathways related to valine, leucine, and isoleucine biosynthesis/degradation, the citrate cycle, alanine, aspartate and glutamate metabolism, arginine biosynthesis, and glyoxylate and dicarboxylate metabolism could be reliably analyzed using DBS samples. In contrast, pathways associated with glycine, serine and threonine metabolism, as well as cysteine and methionine metabolism, showed significant alterations, suggesting their unsuitability for pathway analysis. Given the variability described above, certain metabolites may present a consistent trend of alteration in intensity. To assess this, we conducted linear correlation analysis based on storage time at each temperature. Employing a threshold of |R| ≥ 0.9, we identified a total of 8, 47, and 77 metabolites showing higher correlations at 4 °C, 25 °C, and 40 °C, respectively (see Table ). All of these metabolites exhibited an RSD greater than 15% compared to the reference metabolite intensities for at least one day. Subsequently, we conducted heatmap analysis, which revealed that most metabolites exhibited gradual increases or decreases over time at the three storage temperatures (4 °C, 25 °C, and 40 °C) (Fig. A–C). Notably, among them, 2,3,4-trihydroxybutyric acid, pyruvic acid, and 2-hydroxyvaleric acid showed gradual increases, yet displayed insignificant changes until after 7 days at 4 °C. Conversely, o-hydroxyhippuric acid, glucose, nicotinamide adenine dinucleotide, phosphocreatine, and alpha-ketoisovaleric acid showed decreasing trends and exhibited insignificant changes until after 7 days at 25 °C. Interestingly, all PC, PE, and TAG species displayed a gradual decrease, in contrast to the increase observed in LysoPCs, indicating a propensity for degradation of PCs, PEs, and TAGs into LysoPC forms at 25 °C. Moreover, metabolites with |R| below 0.9 that consistently maintained an RSD below 15% across all time points, despite not demonstrating significant changes here, could potentially undergo notable alterations over an extended time frame.
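A minimal Python sketch of this correlation screen is shown below, assuming the same hypothetical long-format intensity table as above. Spearman correlation is used because the statistical analysis section names it; the original workflow may differ in detail.

```python
# Minimal sketch: correlate each metabolite's intensity with storage time at each
# temperature and keep metabolites with |R| >= 0.9 (column names are hypothetical).
import pandas as pd
from scipy.stats import spearmanr

data = pd.read_csv("dbs_intensities.csv")  # columns: metabolite, temp_c, day, replicate, intensity

results = []
for (met, temp), grp in data.groupby(["metabolite", "temp_c"]):
    r, p = spearmanr(grp["day"], grp["intensity"])
    results.append({"metabolite": met, "temp_c": temp, "r": r, "p": p})

corr = pd.DataFrame(results)
trending = corr[corr["r"].abs() >= 0.9]
# Number of time-dependent metabolites per storage temperature.
print(trending.groupby("temp_c")["metabolite"].nunique())
```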
Blood, plasma, and serum, which collectively offer a comprehensive reflection of biological systems, have proven invaluable in providing mechanistic insights into the diagnosis of various clinical diseases and identifying potential metabolic biomarkers . The logistical simplicity of DBS collection, storage, and transportation has led to a substantial rise in interest in the utilization of DBS in clinical diagnostic applications. The storage conditions of DBS samples may adversely affect the metabolite profiles, particularly for representative biomarker candidates, potentially impacting the reliability of DBS samples for clinical diagnosis applications. To improve the practicality of DBS sampling methods for clinical metabolomics, it is essential to identify which representative metabolites remain stable or unstable under different storage conditions, ensuring the scientific validity and accuracy of clinical studies. Therefore, studying the stability of metabolites and gaining a comprehensive understanding of the differences in metabolite levels in DBS samples stored at various temperatures, ranging from refrigerated to elevated temperatures, is crucial. In the current study, DBS samples were obtained from the same participant to minimize biological variability. These samples were stored at three different temperatures (4 °C, 25 °C, and 40 °C) and extracted at four different time points: 3, 7, 14, and 21 days after storage, in order to evaluate the stability of metabolites over time. Ultimately, we detected 1106 features and identified 353 metabolites (see Table ), encompassing nearly all classes of metabolites found in plasma. Furthermore, we observed that approximately 130 metabolites remained stable at 25 °C for the entire 21-day duration. These metabolites show potential as biomarker candidates for metabolomic clinical research. Our results offer valuable insights into the impact of pre-storage conditions on metabolite profiles, facilitating robust untargeted metabolomics studies using both novel and traditional micro-blood samples. In addition, stable metabolites can be confidently utilized in DBS-based metabolomics for clinical application, while unstable metabolites should be avoided. However, there are various limitations to this study. (1) The information provided regarding storage temperature and duration is inherently constrained by the metabolites that were measured. (2) It would be even better to have a DBS stored at −80 °C as a control at each time point. (3) Further investigation incorporating desiccants is essential to determine whether the degradation of metabolites in DBS is caused by residual moisture. Overall, our study provided a thorough understanding of specific metabolites in DBS samples stored at different temperatures and durations. To our knowledge, this study is the first comprehensive investigation into the detailed stability of hundreds of metabolites under varying storage conditions. This study offers valuable insights for study design, standardizing biomarker selection, and enhancing data quality.

Chemicals and reagents

MS-grade methanol, acetonitrile (ACN), formic acid, water, ammonium acetate, and HPLC-grade methyl tert-butyl ether (MTBE) were purchased from Sigma-Aldrich (St. Louis, MO, USA). HPLC-grade isopropanol (IPA) was obtained from Thermo Fisher Scientific Co., Ltd (Shanghai, China). Whatman 903TM protein saver cards, utilized for DBS sample preparation, were obtained from Whatman (Maidstone, UK). The internal standards, including gibberellic acid A3, 13 C sorbitol, and PE (17:0/17:0), were acquired from Sigma-Aldrich (St. Louis, MO, USA).
Sample preparation

For the DBS storage stability study, samples were all collected from a single individual simultaneously to minimize any potential biological variability. This study was approved by the Research Ethics Committee, and informed consent was obtained from the participant. Venipuncture was conducted on the subject’s cubital vein under fasting conditions. Following collection, 50 µL of whole blood was promptly transferred onto Whatman 903TM protein saver cards. Subsequently, all samples underwent complete air drying for four hours at room temperature. The DBS paper cards were then individually enclosed in zip-closure foil bags and stored at 4 °C, 25 °C, and 40 °C for durations of three days, one week, two weeks, and three weeks. To ensure robustness and reliability, triplicate samples were collected for each temperature and time interval. Furthermore, control samples were obtained at the initial time point (0 day) and stored at -80 °C. Additionally, samples were collected on days 3, 7, 14, and 21 from each storage temperature. DBS samples retrieved at each time point were preserved at -80 °C until analysis throughout the study.

Sample extraction

For each specified time point (3, 7, 14, and 21 days) and temperature (4 °C, 25 °C, and 40 °C), samples were retrieved from their respective storage environments and promptly preserved at -80 °C until analysis. DBS underwent processing by excising four dried blood slices, each with a diameter of 3 mm, from a single dried spot, which were then directly transferred into individual microcentrifuge tubes. DBS samples were extracted as previously reported . In brief, DBS slices were mixed with 700 µL MTBE buffer containing 0.45 µg/mL of gibberellic acid A3, 1 µg/mL of 13 C sorbitol, and 0.45 µg/mL of PE (17:0/17:0) as internal standards. Gibberellic acid A3 and PE (17:0/17:0) were used as the internal standards for the hydrophilic and lipophilic LC-MS analyses, respectively. In addition, 13 C sorbitol was used for the GC-MS platform. Internal standards were utilized to monitor the stability of extraction and the on-board process of the MS platform; a coefficient of variation (CV) of 20% or less for each internal standard was considered stable. Then, the samples were sonicated for 15 min in a 4 °C bath. Subsequently, 350 µL of solution (methanol/water, v/v, 1:3) was added to facilitate phase separation. The upper lipophilic phases and lower hydrophilic phases were separated by high-speed centrifugation (12,700 rpm, 5 min at 4 °C, Centrifuge 5430R, Eppendorf, Germany). Next, 400 µL of the lower hydrophilic phase was further mixed with 1.1 mL of methanol, incubated at 4 °C for 1 h, and then centrifuged at 12,700 rpm for 10 min at 4 °C. The upper phase containing the lipophilic fraction (350 µL) was transferred to a microcentrifuge tube for lipid analysis. The hydrophilic phase was divided into two microcentrifuge tubes, one containing 350 µL and the other containing 1000 µL, intended for GC-MS and UHPLC-MS metabolite analysis, respectively. All aliquots were dried using a speed vacuum concentrator and stored at -80 °C. Before use, the dried samples of lipophilic and hydrophilic constituents were dissolved in 200 µL of ACN/IPA (v/v, 7:3) and water, respectively. Finally, they were transferred into sample vials for further analysis. To ensure measurement quality and equipment performance, three types of quality control (QC) samples were prepared according to the aforementioned procedure.
These included a pooled sample consisting of 50% randomly selected DBS samples (QCbio), a mixture of chemical standards (QCmix), and a sample containing only solvents (QCblank). To identify impurities in the solvents or contamination in the separation system, the analytical batch began with a QCblank. QCmix and QCbio samples were inserted after every ten biological samples during analysis. Sample detection by GC-MS and UHPLC-MS The derivatization of the dried hydrophilic metabolites was performed according to Lisec et al. . Briefly, the dried fractions were oximized with a methoxyamine–pyridine solution and then derivatized by adding MSTFA for GC-MS analysis. The derivatized hydrophilic metabolites were analyzed using an Agilent 7890B gas chromatograph fitted with an Rxi®-5Sil MS GC column (30 m × 0.25 mm, 0.25 μm film), coupled to a time-of-flight (TOF) mass spectrometer (Leco Corp., St. Joseph, MI, USA) with an electron ionization source; samples were injected by an Agilent 7683 series autosampler (Agilent Technologies GmbH, Waldbronn, Germany). High-purity helium was used as the carrier gas at a flow rate of 1 mL/min. The oven temperature was initially held at 50 °C for 2 min and then ramped to 330 °C at a rate of 1 °C per minute. The ion source and interface temperatures were set at 250 °C and 280 °C, respectively. The detector voltage and electron energy were set at 1.2 kV and 70 eV, respectively. The lipophilic and hydrophilic extracts were analyzed using an ultra-high performance liquid chromatography system (UHPLC; Waters ACQUITY, Milford, MA, USA) coupled to a Thermo Fisher Q-Exactive mass spectrometer (Bremen, Germany) with an electrospray ionization (ESI) source in both positive and negative modes. For lipophilic samples, a BEH C8 column (1.7 μm, 2.1 × 100 mm) was used for chromatographic separation with a column temperature of 60 °C. Gradient elution was performed using mobile phase A (water) and mobile phase B (ACN/IPA, 70:30, v/v), both containing 0.1% acetic acid and 0.1% ammonium acetate, at a flow rate of 0.4 mL/min. The gradient elution program was as follows: 55% B at 0–1 min, 75% B at 4 min, 89% B at 12 min, 100% B at 12 min, 100% B at 19.5 min, 55% B at 19.51 min, and 55% B at 24 min. The injection volume was 2 µL, with the samples held at 10 °C. For hydrophilic samples, chromatographic separations were performed on an HSS T3 column (1.8 μm, 2.1 × 100 mm) with a column temperature of 40 °C. Gradient elution was carried out using mobile phase A (water) and mobile phase B (ACN), both containing 0.1% formic acid, at a flow rate of 0.4 mL/min. The gradient elution program was as follows: 1% B at 0 min, 70% B at 13 min, 99% B at 13.01 min, 99% B at 18 min, 1% B at 18.01 min, and 1% B at 22 min. The injection volume was 3 µL, with the samples held at 10 °C. For both lipophilic and hydrophilic metabolite analyses, full scan and data-dependent acquisition (DDA) were employed to acquire MS and MS/MS spectra. Full-scan analyses were performed over the range of m/z 100–1500. DDA targeted the top five most intense precursor ions for MS/MS analysis in the pooled DBS samples (QCbio), using scan ranges of m/z 100–310, 300–710, and 700–1500, respectively.
The operating conditions of the mass spectrometer were as follows: spray voltage of 3500 V in positive mode and 3000 V in negative mode, nebulizer pressure of 20 psi, sheath gas temperature of 400 °C, sheath gas flow of 10 L/min, and a normalized collision energy of 30 V for MS/MS . Each sample was injected three times, and the average intensity was used to represent the relative intensity of the metabolites. Data processing and analysis The raw data acquired by GC-MS were first analyzed using Leco ChromaTOF (version 5.40). The TargetSearch Bioconductor package in R (version 4.0.3) was used for peak detection, fatty acid methyl ester (FAME)-based retention time correction, and mass spectral comparison with the Fiehn reference libraries . Metabolite identifications were further confirmed by manual inspection of the chromatograms. UHPLC-MS chromatograms were processed using PAppLine™, the in-house software developed by Metanotitia Inc., supplemented by commercial software including Compound Discoverer 3.1 and LipidSearch (Thermo Fisher Scientific, Waltham, MA, USA). The first processing step included peak detection, peak filtering, baseline correction, and removal of isotopic peaks . Lipids and metabolites were annotated based on retention time, precursor ion, and product-ion fragmentation patterns using the Metanotitia Inc. library UlibMS . In detail, metabolites were matched against a sub-library of six thousand compounds acquired under the same chromatographic and spectrometric conditions as the measured samples, and lipids were annotated with a sub-library of 1,700 lipids based on precursor m/z, fragmentation spectrum, and elution pattern . The matching criteria were a retention time difference within 0.2 min and a mass accuracy better than 10 ppm. Features detected in less than 60% of the DBS samples were then removed from further analysis. Missing values were imputed using the MICE random forest algorithm, taking sample type and peak intensity into account . Finally, a calibration procedure including logarithmic transformation and scaling was performed. Statistical analysis Multivariate statistical analysis was performed using MetaboAnalyst 6.0 ( https://www.metaboanalyst.ca/ ). Log10 transformation and Pareto scaling were applied before generating principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) models. The stability of metabolites was determined by comparing the relative standard deviation (RSD) of metabolite intensities at each time point against the control samples collected on day zero, with an RSD below 15% indicating stability. Conversely, metabolites with an RSD exceeding 15% were deemed unstable. Statistical analysis and Spearman correlation analysis were performed in Python to investigate differences between metabolites. Venn diagrams and bar plots were generated using the OmicStudio tools ( https://www.omicstudio.cn/tool ), and violin plots were generated using the Hiplot tool ( https://hiplot.cn/ ). Hierarchical cluster analysis (HCA) heatmaps were produced with MetaboAnalyst 6.0 using the Euclidean distance measure and Ward clustering to display the relative intensity changes of metabolites. Additionally, pathway enrichment analysis of the stable and unstable metabolites was performed in MetaboAnalyst 6.0.
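To make the stability criterion concrete, the following R sketch shows one way to compute, for each metabolite, temperature, and time point, the RSD of the triplicate intensities pooled with the day-0 control samples, and to flag metabolites that stay below the 15% threshold. The input data frame intensity_long and its column names are hypothetical placeholders for illustration only, not the actual analysis code used in this study:

library(dplyr)

# Hypothetical long-format table: one row per metabolite and injection, with
# columns metabolite, temperature ("4C", "25C", "40C"), day (0, 3, 7, 14, 21)
# and intensity (relative peak intensity); day-0 rows are the -80 °C controls.
rsd <- function(x) 100 * sd(x) / mean(x)

day0 <- intensity_long %>%
  filter(day == 0) %>%
  select(metabolite, ctrl = intensity)

stability <- intensity_long %>%
  filter(day > 0) %>%
  group_by(metabolite, temperature, day) %>%
  summarise(
    # pool the triplicate intensities at this condition with the day-0 controls
    rsd = rsd(c(intensity, day0$ctrl[day0$metabolite == cur_group()$metabolite])),
    .groups = "drop"
  ) %>%
  mutate(stable = rsd < 15)   # RSD < 15% taken to indicate stability

# Treat a metabolite as stable at a given temperature only if it is stable
# at every time point up to day 21.
stable_at_temp <- stability %>%
  group_by(metabolite, temperature) %>%
  summarise(stable_all_days = all(stable), .groups = "drop")

Whether the day-0 controls are pooled with, or compared against, the stored replicates is a design choice; the sketch follows the pooled-RSD reading of the criterion described above.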
RetiSurge - Enabling “Dry Lab” vitreoretinal surgical training during COVID-19 pandemic
3a8cf298-2a2d-419e-8b42-c135afd63754
8012966
Ophthalmology[mh]
Model eye framework For 3D printing the RetiSurge model eye, an anatomically accurate sketch of the eyeball with appropriate dimensions is incorporated into Computer-Aided Design (CAD) software. The CAD software exports the model from its native file format into an STL (Standard Triangle Language) file, which is used for 3D printing. Information regarding material properties, dimensions, tolerances, and the manufacturing process is incorporated in this file. The STL file is then converted into machine language, i.e., G-code (through a process called “slicing”), which is recognized by the 3D printer to create the model eye. The RetiSurge model consists of two interlocking hemispheres; the upper half is printed using TPU (thermoplastic polyurethane) , a flexible material with tenacity similar to that of the human sclera. The lower half is composed of a plastic filament material, i.e., PLA (polylactic acid) . Preparation of retinal film A colored fundus image is printed on a regular white A4 sheet and folded into a hemisphere using k comb wrapping. A 3D-printed tracer is used to customize the fundus printout so that it fits perfectly into the eyeball model. For a more realistic fundus image, printing on a polyethylene terephthalate glycol (PETG) sheet shaped by thermoforming technology can be an alternative . The film is placed in the lower half of the eyeball model. Alternatively, a liquid skin bandage can be applied to practice membrane peeling. Assembly The RetiSurge model eye needs to be fixed in place for surgical practice. We used a microscope handle cover fitted into a hole made in a cardboard box . Using routine vitrectomy trocar cannula sets, sclerotomies can be fashioned in the upper flexible half for the introduction of an endoilluminator and a second active instrument such as a vitrectomy or endolaser probe. This assembly is used in conjunction with the operating microscope and a wide-angle visualization system to visualize the posterior segment. The technique is demonstrated in the supplemental surgical video .
We describe a simple, 3D-printed model eye – RetiSurge – for practicing the basic steps of VR surgery. Visualization is the key to successful VR surgery. RetiSurge helps develop hand-eye coordination, orientation to the wide-angle viewing system, and XY movements of the microscope while maintaining intraocular focus on the retinal film. By helping the trainee surgeon develop spatial sense as a precursor to actual vitrectomy, the risk of inadvertent retinal touch during actual surgery can be minimized. VR surgery requires bimanual manipulation to illuminate the required locus of the retina while avoiding glare from instruments, as well as to maintain a stable globe while approaching the various quadrants of the eye. These maneuvers require control of the non-dominant hand, and this dexterity can be developed by training with RetiSurge. RetiSurge with the PETG film is excellent for endolaser practice: the laser spots resemble those on the human retina, with power settings comparable to the real-life scenario . The retinal film inside can be changed easily, and multiple pre-cut or pre-printed images can be kept at hand for ease of use. The same model eye can be used numerous times by various trainee surgeons. The materials used in manufacturing this model eye are safe for sterilization by ethylene oxide. As no biological tissue is used, a sterilized RetiSurge model can be used inside the regular operating room for training. This enables use of the microscope and wide-angle viewing system as well as the vitrectomy machine and laser. The limitations of the RetiSurge model eye include the inability to practice all surgical maneuvers and the fact that it simulates the fundus view of an aphakic eye. Eyeball models for training have been previously described. An eyeball model created from a ping-pong ball for practicing slit-lamp laser photocoagulation, described by Ganne et al., is an excellent low-cost model that utilizes pre-printed fundus images on paper. The use of a liquid skin bandage inside a commercially available model eye for membrane peeling has also been described. Ophthalmology is a medical speciality that largely deals with elective care. In the ongoing COVID-19 pandemic, there have been large reductions in ophthalmic practice as well as a cessation of elective surgeries. An online survey amongst trainee ophthalmologists revealed that 80.7% felt an impact on surgical training, with a perceived 50% reduction in surgical volume. In addition, 54.8% reported higher stress and 46.5% of respondents stated they were “feeling unhappy”. Challenging times call for innovation. In the new normal of a society that has to continue to function amidst a global pandemic, “Dry Lab” training is a way forward. With reduced patient volumes and the need for shorter surgical times, training can continue with an initial phase of simulation on model eyes like RetiSurge. To conclude, RetiSurge is a simple, cost-effective, and reusable model for early VR surgical training. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
The role of the estimand framework in the analysis of patient-reported outcomes in single-arm trials: a case study in oncology
f28ce5fe-0caa-46c5-9fc3-c50971ad4fd3
11585159
Internal Medicine[mh]
Patient-reported outcomes (PROs) play an increasingly important role in the evaluation of treatments . In the assessment of anti-cancer therapies, PROs are particularly relevant since the therapies are often aimed at prolonged survival and an improvement in quality of life (QoL) and/or symptom reduction. Regulatory authorities such as the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) have published guidelines on how to incorporate PROs in studies submitted for cancer therapy approval . Guidelines for the inclusion of PROs in trial protocols and scientific reporting guidelines for PROs in clinical trials have also been developed . Results from single-arm trials (SATs) are becoming more prominent in the regulatory approval for oncology medicines, especially in rare cancer types for which there is not yet an effective standard of care, late-stage cancer, and for targeted therapies suitable only to a subgroup of patients with a specific mutation. One-third of the trials involved in FDA approval of oncological therapies between 2014 and 2019 were single-arm studies . Although in some situations a carefully designed SAT is the most ethical or feasible option, results from SATs require careful interpretation because of the high risk of biases and the absence of concurrent control . This is a particular concern for PRO endpoints where dropout may be associated with the outcome measured. Careful interpretation of an estimate calls for a clearly defined target of estimation, also called an “estimand” by the International Council for Harmonization (ICH). The ICH have set out their “estimand framework” in an addendum (R1) to their guideline E9 on Statistical Considerations for Clinical Trials . The framework aligns the design and analysis of a trial with its aims. Fiero et al. have shown how the estimand framework may be used to translate a PRO-related research question into a fully defined estimand in a hypothetical randomized trial . In a recent literature review on SATs in oncology that reported PROs , only two of the included studies specified a research hypothesis for the PROs. Intercurrent events in relation to the analysis of PROs were not discussed in any of the studies. The collection of PROs often stopped after treatment discontinuation, limiting the possibilities of a ‘treatment policy’ or intention-to-treat analysis. Linear mixed models or other (implicit) imputation methods were used in many studies without acknowledging how the implicit imputation affected the interpretation of the results . The intercurrent event of death is of particular concern here since PROs after death do not exist and implicit imputation after death is counterintuitive. This case study was undertaken within the work package on SATs of the SISAQOL-IMI consortium, which aims to develop recommendations on design, analysis, presentation, and interpretation for PRO data in cancer clinical trials . Our objective was to demonstrate how the estimand framework can be implemented in SATs with PRO endpoints. Specifically, we focused on global health-related QoL measured in a SAT in non-small cell lung cancer . In this paper, we present a range of possible choices of estimand, corresponding statistical methods, and their implications retrospectively using anonymized data from a real single-arm cancer trial. In this section, we first briefly describe the design of the clinical trial used for illustration in this case study, in particular regarding PRO data collection. 
After touching on how missing PRO data were handled, we discuss the various estimands that were illustrated in this case study and the corresponding statistical methods for estimation. All statistical analyses were performed using R and analysis code is available as an online supplement. Trial design The single-arm, multicenter phase 2 trial evaluated the efficacy, safety, and tolerability of a new anticancer treatment in patients with locally advanced or metastatic anaplastic lymphoma kinase (ALK)-positive non-small cell lung cancer. The co-primary outcomes of the trial were objective tumor response (as per RECIST V1.1) and adverse events (using the National Cancer Institute Common Terminology Criteria for Adverse Events, V.4.0). In addition, several PROs were assessed as secondary endpoints, of which we focused on the overall QoL as measured by EORTC QLQ-C30 global QoL scale. We will refer to this PRO as “global QoL.” The global QoL scale of the EORTC QLQ-C30 was scored according to the EORTC scoring manual such that scores ranged from 0 (representing the worst) to 100 (best possible score). In the original study, a clinically relevant difference of 10 points in mean QoL compared to the mean at the start of protocol treatment was defined . The trial was conducted before the estimand framework was developed, and no strategy to deal with intercurrent events or missing PRO data was explicitly mentioned in the paper. Collection of PRO data in the trial While on trial medication, participants were asked to complete the EORTC QLQ-C30 questionnaire on the first day of protocol treatment and every three weeks while on protocol treatment. The tri-weekly intervals were aligned with the treatment cycles of chemotherapy, which was the standard of care for this disease setting. We therefore refer to the timing of PRO measurements with their cycle number and baseline is defined here as the first day of cycle 1. After cycle 10 (30 weeks), the study protocol allowed for completion of the questionnaire on the first day of alternate cycles (i.e., every six weeks). With respect to PRO data collection, three types of intercurrent events occurred in the study: progression of disease (PD), treatment discontinuation (TD, mostly due to disease progression), and death. Although disease progression was often followed by the discontinuation of treatment in the trial, it was left to the discretion of the physician and patient to decide when to stop the treatment. Upon discontinuation of trial medication, patients were asked to complete one final questionnaire, after which PRO data collection was ended, while only follow-up for overall survival continued. The sponsor anonymized the measurements before a subset was shared with us for this case study. Methods for missing PRO data Description of missing data For each cycle, we summarized the number of patients in each of the following six states: 1. alive, on treatment and QoL available; 2. alive, on treatment and QoL not available; 3. alive, off treatment and QoL available (this was extremely rare); 4. alive, off treatment and QoL not available; 5. lost to follow-up for overall survival (and QoL); 6. deceased. In the rare case when there was more than one PRO measurement reported by the same patient in one cycle, we averaged the patient’s PRO measurements in that cycle . As intercurrent events play an important role in the definition of an estimand, the availability of PROs before and after intercurrent events was also analyzed descriptively. 
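As an illustration of how such a per-cycle classification could be coded, the following R sketch assigns each patient-cycle record to one of the six states; the data frame pro_long and its columns are hypothetical placeholders chosen for this example and are not the variable names of the actual trial database:

library(dplyr)

# Hypothetical input: one row per patient and scheduled cycle, with
#   qol_available TRUE if a global QoL score was reported in that cycle,
#   on_treatment  TRUE if the patient was still on protocol treatment,
#   death_cycle   first cycle at which the patient was known to be dead (NA if alive),
#   ltfu_cycle    cycle of loss to follow-up for overall survival (NA otherwise).
states <- pro_long %>%
  mutate(state = case_when(
    !is.na(death_cycle) & cycle >= death_cycle ~ "6 deceased",
    !is.na(ltfu_cycle)  & cycle >= ltfu_cycle  ~ "5 lost to follow-up",
    !on_treatment & !qol_available             ~ "4 alive, off treatment, QoL not available",
    !on_treatment &  qol_available             ~ "3 alive, off treatment, QoL available",
     on_treatment & !qol_available             ~ "2 alive, on treatment, QoL not available",
    TRUE                                       ~ "1 alive, on treatment, QoL available"
  ))

# Number of patients in each state at each cycle
state_counts <- states %>% count(cycle, state)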
Imputation of missing data In line with our illustrative aim, we created one (reasonably realistic) complete dataset in which the implementation of the estimand framework could be studied. To this end, we applied a single imputation method (Appendix A.1). We observed a general drop in PRO scores in the last five cycles before death, whereas no such drop was observed before censoring. The progression of disease and the decision to discontinue treatment may be related to patients’ QoL trajectories as well. We therefore assumed that the time-distance to death and other intercurrent events was relevant to the missing PROs at each cycle. We imputed missing PRO data using single imputation (see Appendix A.1 for details) for each participant until cycle 40 or death, whichever occurred first, under the assumption that the PROs at each cycle were missing at random conditional on the cycle number, available QoL measurements at other cycles, death, PD and TD, and the time until these events. Missing values were imputed before and after PD and/or TD, but not after death. We assumed non-informative censoring in our analyses, as most censoring was administrative at study end. Applying the estimand framework in our case study We introduced various estimands (i.e., targets of estimation) for describing global QoL over time from the start of protocol treatment. As defined in ICH E9-R1, an estimand has five attributes: the treatment, the population, the variable of interest, the population-level summary, and a strategy for handling intercurrent events . In this case study, the assigned treatment was the same trial medication for all participants, and we used the in- and exclusion criteria of the original trial to define our target population of ALK positive non-small cell lung cancer patients. Appendix A.2 provides a general discussion on defining the variable, the population summary and the handling of intercurrent events for PROs in a SAT. Below, we outline the estimands that were illustrated in this case study specifically, as well as corresponding statistical methods for estimation (for a schematic overview see also Table ). Defining the variable and the population summary To illustrate the different variables of interest, we computed summaries of the numerical value of the PRO, the change from baseline, and a responder/non-responder classification at each cycle using the raw, unimputed data. We opted for the absolute numerical value of the PRO for subsequent analyses . Because of the limited availability of the PRO data in later cycles, we restricted our analyses to cycles 1–40 (months 0–27). Furthermore, we opted for the mean QoL value at each cycle as the population summary. Since the distribution of observed QoL was reasonably symmetric in exploratory analyses, the mean was deemed to be an appropriate summary. For a range of intercurrent event strategies, we applied a corresponding analysis (outlined below) to show how results and interpretations differ. Strategies to deal with death In our illustrations of strategies to handle death in the analyses, we used all (imputed) data after TD and PD, regardless of whether TD and PD had occurred, following a treatment policy strategy. For transparency, we provided survival estimates with our global QoL estimates, as well as estimates of the probability of remaining progression-free and of remaining on protocol treatment. While alive strategy First, for each cycle, we estimated the mean QoL in the patients who were still alive in that cycle. 
Note that over time the group in which the means are calculated becomes smaller due to mortality and censoring, and may have a different distribution of characteristics than the group who is alive at the first cycle. Therefore, we provided (Kaplan–Meier) estimates of survival with the estimated means while alive. Essentially, we are interested in a bivariate outcome here: survival and QoL conditional on survival. The while alive estimates were obtained in two ways: (1) using generalized estimating equations (GEE) with an independence correlation structure, which have been shown to allow direct modelling of means over time conditional on survival status , and (2) modelling the individual QoL values over time with a linear mixed model (LMM) followed by averaging individually predicted QoL values only over those alive at each cycle . A GEE analysis with an independence correlation structure, with time as a categorical variable and without any approaches to handle missing data will yield the same means over time as a purely descriptive approach. So, this GEE approach is in line with a descriptive aim. In addition, the GEE approach models the means over time only, whereas the LMM provides individual predictions that we averaged over the alive subject afterwards (since using marginal means from the model directly would correspond to a hypothetical strategy, see below). Composite strategy As an example of a composite strategy for death, all global QoL values after death were set to 0, a value chosen somewhat arbitrarily for the sake of illustration here (cf. EQ-5D, a health utility score where a value of 0 corresponds to death ). The means of this composite outcome were estimated using GEE as above. Whether it makes sense to put global QoL and death on the same scale, and to assign a value of 0 to death on this scale, is debatable. Different choices of the QoL variable of interest allow for other ways to define a composite endpoint of QoL and death. For example, if a responder analysis is performed, death might be included in the definition of nonresponse. In an analysis of time-till-deterioration, deterioration may be defined as a drop in QoL or death. However, such analyses have their limitations as discussed in Appendix A.2 . Hypothetical strategy Finally, we estimated the mean QoL over time in a hypothetical situation where all patients would remain alive until at least cycle 40. Two LMMs including the cycle number as a categorical variable and 1) a random intercept only, or 2) a random intercept and slope, and were fitted to the available data. Subsequently, marginal means for cycles 1 through 40 were obtained from the fitted models. These models assume that the QoL values of patients who are no longer alive at a particular cycle are missing at random conditional on observed QoL values and the cycle number. Under this assumption, LMMs (implicitly) extrapolate QoL trajectories for each patient after their death to model a hypothetical scenario assuming no deaths in the trial. Strategies to deal with treatment discontinuation Various strategies to handle TD in PRO data analysis were applied along with strategies to handle death described above. Progression of disease was ignored here using a treatment policy strategy for PD. We did not define a numerical composite outcome for TD, as associating a single global QoL score with TD did not seem reasonable. 
While on treatment First, we implemented a while on treatment strategy, which implies a while alive strategy since treatment does not continue after death. The mean QoL at each cycle (while on treatment) was estimated by removing all observations after treatment discontinuation from the data and fitting a GEE with independence correlation structure to the remaining data. Hypothetical strategies Additionally, two hypothetical strategies were illustrated. For both strategies, an LMM was fitted to all data before TD. For the first strategy, the model’s predictions were averaged over all patients at cycles 1–40. This resulted in estimated mean QoL in the hypothetical situation assuming no treatment discontinuation or death in the study. For the second strategy, we averaged the same model’s predictions over the subset of patients who were still alive at each respective cycle. This was intended to estimate the mean QoL while alive, in the hypothetical situation where treatment discontinuation did not occur before death within the study. Treatment policy strategy Finally, we applied a treatment policy strategy to TD, while alive. Here, we used the data with (imputed) measurements after TD and applied GEE to estimate the mean QoL while alive, regardless of TD. This is the same analysis as for the while alive method described previously. As almost no data were available after TD, these estimates were mostly based on imputed data at later cycles. Strategies to deal with disease progression Subsequently, we set the strategy for TD to a treatment policy strategy and shifted our focus to the intercurrent event of disease progression (PD). We defined intercurrent event strategies for PD analogously to those for TD.
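To indicate how strategies like these can be operationalized, the sketch below shows one possible R implementation of three of the estimators for the death strategies: the while-alive means via a GEE with independence working correlation (geepack), the composite endpoint with QoL set to 0 after death, and the hypothetical (no-death) means via a linear mixed model (lme4) with marginal means from emmeans. The data frames qol_long and qol_composite and their columns are hypothetical placeholders; the analysis code released with the study as an online supplement remains the authoritative reference:

library(dplyr)
library(geepack)   # GEE with independence working correlation
library(lme4)      # linear mixed models
library(emmeans)   # marginal means per cycle

# Hypothetical long data: one row per patient and cycle while alive, with
# patient id, cycle (1-40), cycle_num (numeric copy of cycle) and qol (0-100).
dat <- qol_long %>%
  mutate(cycle = factor(cycle)) %>%
  arrange(patient, cycle)

# 1. While alive: mean QoL per cycle among patients alive at that cycle.
fit_while_alive <- geeglm(qol ~ cycle, id = patient, data = dat,
                          family = gaussian, corstr = "independence")

# 2. Composite strategy: qol_composite additionally contains rows for
#    post-death cycles with qol set to 0.
fit_composite <- geeglm(qol ~ cycle, id = patient,
                        data = qol_composite %>%
                          mutate(cycle = factor(cycle)) %>%
                          arrange(patient, cycle),
                        family = gaussian, corstr = "independence")

# 3. Hypothetical strategy for death: LMM with categorical cycle as fixed
#    effect and a random intercept and (linear) slope per patient; the
#    marginal means implicitly extrapolate trajectories beyond death.
fit_lmm <- lmer(qol ~ cycle + (1 + cycle_num | patient), data = dat)
means_hypothetical <- emmeans(fit_lmm, ~ cycle)

For the while-alive estimand, the GEE means per cycle are purely descriptive of the survivors at that cycle, so Kaplan–Meier survival estimates would be reported alongside them, as noted above.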
Inclusion and follow-up A total of 876 patients from the lung cancer SAT with at least one QoL or clinical measurement were included in our analysis. This excludes patients from participating centers not allowing the use of data for this purpose. The median [IQR] follow-up time in those censored for overall survival was 41.8 [28.1–47.3] months. Most censoring (72%) was near the data collection cut-off date. We therefore assume that most censoring (for overall survival) was administrative censoring and uninformative censoring was likely to hold.
In a multivariable Cox regression, censoring was not associated with sex, age, baseline ECOG performance status or the number of previous therapies. Description of clinical characteristics Demographic and clinical characteristics of the study participants in our data were summarized (Appendix A.3 Table S1). Death was observed in 576 (66%) patients. The Kaplan–Meier estimate of median survival time was 21.7 months [95% CI 19.8–24.2], with probabilities of survival of 67% [95% CI 64%–71%] and 47% [95% CI 43%–50%] at one and two years after the start of protocol treatment (A.3 Figure S2). Description of global quality of life At baseline, 834 (95%) of the patients filled in the QoL questionnaire. The mean (SD) global QoL was 53.7 (25.2) at cycle 1. A positive association between QoL at baseline and overall survival was observed (A.3 Figure S3, p < 0.001 for the log-rank test where participants were stratified based on four equal-length intervals of baseline QoL). Description and imputation of missing PRO data Availability of PRO data The number of available PROs reduced to 538 (61%) by cycle 10 and 221 (25%) by cycle 20. In cycle 40, when the estimated survival probability was 43%, only 61 patients (7%) were still alive and completed the QoL questionnaire (Fig. ). For this reason, we restricted our further analyses to cycles 1–40. The median time between the last available PRO and death observed during follow-up was 2.7 months (A.3 Figure S4). PRO data were available until less than 1 month before death for 140 patients (24% of observed deaths), and less than 3 months before death for 315 patients (55%). PRO measurements were mostly available until shortly before censoring of the survival time: the median [IQR] time between the last available PRO and censoring was 1.59 [0.62–16.24] months. PROs were collected until PD for most patients in whom PD was recorded ( n = 648). After PD, 345 patients continued the trial medication for at least one month and 241 continued for 3 months or more. Most (89%) patients reported a PRO measurement at the discontinuation of protocol treatment. Eight patients had two PRO measurements post-discontinuation, and one patient had three PRO measurements after they discontinued treatment. Imputation After imputation, the mean PRO in those alive at each cycle was slightly lower than in the available data (A.1, Figure S1). Intercurrent events such as PD and (being close to) death may lead to missing PROs and be associated with lower QoL. As our imputation model takes such events into account, we would indeed expect global QoL in the imputed dataset to be somewhat lower on average than in the available data, particularly at later cycles when more participants have experienced PD and/or are in the final weeks of their life. Illustration of the estimand framework Below we show results corresponding to the estimands defined in the section “Applying the estimand framework in our case study”. Our focus is on the illustration of different variables of interest and on strategies to deal with intercurrent events, in particular death, TD and PD. Defining the variable of interest: illustration Our variable of interest was the absolute numerical value of the PRO. For illustration, we provide summaries of three possible variables of interest in our raw data, derived as sketched below: the absolute numerical value, the magnitude of change from baseline and a binary responder/non-responder classification at each cycle (Fig. ).
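A minimal R sketch of how the three candidate variables can be derived from the raw scores is given below; pro_raw and its columns (patient, cycle, qol) are hypothetical placeholders, and the 10-point threshold mirrors the responder definition used in this illustration:

library(dplyr)

variables <- pro_raw %>%                 # hypothetical raw data: patient, cycle, qol
  group_by(patient) %>%
  mutate(
    baseline  = qol[cycle == 1][1],      # absolute value at cycle 1 (baseline)
    change    = qol - baseline,          # magnitude of change from baseline
    responder = change >= 10             # responder: increase of at least 10 points
  ) %>%
  ungroup()

# Per-cycle summaries of the three variables among patients with an observed PRO
per_cycle <- variables %>%
  filter(!is.na(qol)) %>%
  group_by(cycle) %>%
  summarise(
    mean_qol       = mean(qol),
    mean_change    = mean(change, na.rm = TRUE),
    prop_responder = mean(responder, na.rm = TRUE),
    .groups = "drop"
  )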
The absolute (numerical or ordinal) values of the PRO
The mean reported global QoL at the beginning was 54 (n = 834), increased to 67 (n = 654) in cycle 4 and then appeared relatively constant until cycle 40. Note that the population with available measurements shrinks over time through death and drop-out, mainly due to TD and PD.

Magnitude of change from baseline
A downward trend in the mean change from baseline was visible after cycle 6 within the available data. These results illustrate that when the mean QoL in those alive remains constant, and there is a selection process over time where patients with high starting values live longer, the mean change from baseline in those alive will decrease over time. Regarding floor and ceiling effects, 33 patients (4.0% of available baseline measurements) were at the lowest possible QoL level at baseline, whereas 43 patients (5.2%) had the maximum possible global QoL score at baseline. The patients with the minimum possible QoL at baseline cannot have a negative change by definition. At the same time, patients at the top of the global QoL scale at the start can never have a positive change. This is important to consider when interpreting the mean change from baseline.

Responder/non-responder classification
As an example, we defined response at each cycle as an increase of at least 10 points in QoL compared to baseline. A responder definition based on the magnitude of change from baseline may also suffer from ceiling effects: in our example, participants with baseline QoL values of 91 or higher could never be classified as responders. The proportion of responders at each cycle, within those patients who reported PROs, showed an initial increase in the first four cycles and then a gradual downward trend.

Strategies for dealing with intercurrent events
For the intercurrent event strategies defined above, we estimated the corresponding mean global QoL at each cycle within the imputed dataset. The various intercurrent event strategies resulted in diverging estimates, each with their own interpretation.

Death
The estimated mean QoL while alive increased after treatment initiation and showed a slight decreasing trend at later cycles (Fig. ). The GEE and the LMM with post-hoc averaging yielded very similar results in this case. The composite estimates decrease rapidly after cycle 4. This is due to death dominating the composite outcome, with an increasing proportion of patients with a QoL of 0 in the dataset. Both hypothetical strategies resulted in estimated mean QoL values below the while alive estimates. During the hypothetically extended part of the participants' lives, their average QoL at each cycle was estimated to be lower than the average QoL of participants who were alive in the same cycle. The model with the random intercept and slope resulted in lower mean QoL estimates than the model with a random intercept only. The (linear) random slope model fitted the data better (difference in AIC: 1510, p-value for likelihood ratio test: < 0.0001). Models with random effects for flexible spline functions of the cycle number were unstable; hence we could not test for nonlinear effects. Both models aim for the same hypothetical estimand for death and both models extrapolate QoL after death. Yet how the models extrapolate is determined by the model specification. No major differences in CI width occurred between the various analysis methods used.
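The model comparison described above (random intercept only versus random intercept plus slope) can be reproduced along the following lines. This is a hedged sketch reusing the simulated data frame from the earlier example; it is not the authors' analysis code, and the nlme calls shown are one of several reasonable ways to fit and compare these models.

```r
library(nlme)

# Pre-TD data only (the hypothetical strategies fit the LMM to data before TD)
d_pre_td <- subset(dat, on_treatment)

# Random intercept only vs random intercept + (linear) random slope for cycle;
# method = "ML" so that the likelihood ratio test compares fixed+random structure
m_int   <- lme(qol ~ cycle, random = ~ 1 | id,     data = d_pre_td, method = "ML")
m_slope <- lme(qol ~ cycle, random = ~ cycle | id, data = d_pre_td, method = "ML")

# Likelihood ratio test and AIC comparison (analogous to the reported
# AIC difference of 1510 and LRT p < 0.0001 in the trial data)
anova(m_int, m_slope)
AIC(m_int, m_slope)

# Predictions from either model can then be averaged per cycle, either over all
# patients (hypothetical: no death) or over those still alive (while alive).
```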
Regarding statistical efficiency, it is difficult to compare estimation methods that correspond to diverging estimands, as these methods are not targeting the same quantity. However, we observed no major differences in standard error magnitude between the analysis methods illustrated in this subsection (A.3 Figure S5).

Treatment discontinuation
The estimated mean QoL while on treatment at each cycle was slightly higher than the while alive estimates ignoring treatment discontinuation (Fig. ). This reflects that QoL likely decreases to some extent after TD, as drivers of TD may also lead to decreasing QoL. The estimated means in a hypothetical situation without treatment discontinuation, while alive, were higher than the means while alive (treatment policy). The estimated mean global QoL under a hypothetical strategy for both TD and death is lower than the while alive estimate for each cycle. Confidence interval widths were similar, although the estimates for the treatment policy (while alive) strategy were somewhat more precise than the others (A.3 Figure S6). This could be due to the treatment policy estimates using all (imputed) data up to death, whereas the other estimates are based on data from before TD only.

Disease progression
Comparing the resulting estimates for disease progression (Fig. ) to those for treatment discontinuation (Fig. ), we note that QoL in the hypothetical world where PD does not occur in the trial is estimated to be higher than in the hypothetical world where TD does not occur. PD usually occurs earlier than TD in our dataset. Any drop in QoL after PD but before TD that is observed in the data is not taken into account by the linear mixed model in the first hypothetical scenario, since it is only fitted on observed values before PD.
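As a compact illustration of the variables of interest summarized earlier (absolute score, change from baseline, and a 10-point responder classification), the sketch below derives them from long-format PRO data in R. The dat object and column names carry over from the previous sketches and are simulated placeholders rather than the study's variables; note how the responder definition makes response impossible for baseline scores above 90, the ceiling effect discussed above.

```r
# Derive per-cycle variables of interest from long-format PRO data
baseline <- subset(dat, cycle == 1)[, c("id", "qol")]
names(baseline)[2] <- "qol_baseline"

d <- merge(dat, baseline, by = "id")
d$change    <- d$qol - d$qol_baseline   # change from baseline
d$responder <- d$change >= 10           # responder: >= 10-point improvement

# Ceiling effect: patients starting above 90 can never be responders
with(subset(d, cycle == 1), table(qol_baseline > 90))

# Proportion of responders per cycle among those alive at each cycle
# (used here as a stand-in for "those with an available PRO")
aggregate(responder ~ cycle, data = subset(d, alive), FUN = mean)
```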
In this case study, we have outlined the meaning and impact of the estimand framework in the analysis of longitudinal PROs in a SAT. While the causal interpretation of SAT results remains challenging due to a high risk of biases, the estimand framework facilitates a clear definition of the aims and interpretation of a SAT analysis. Hence, the use of this framework mitigates some of the methodological issues previously observed for PROs in SATs, such as the lack of an explicit strategy to address intercurrent events. The results of our illustration show that decisions made in the estimand framework are not trivial. In particular, each intercurrent event handling strategy resulted in its own estimated QoL means over time, with a specific interpretation, suitable for different clinical research aims.

The absolute numerical value of the PRO as the variable of interest
The absolute numerical value of the PRO, change scores and responder classification as possible endpoints of interest were illustrated on our dataset. The interpretation of change scores or responder/non-responder classifications may be obscured by floor and ceiling effects, a selection process due to death on baseline values, regression to the mean and the non-definitive nature of changes in the PRO (see Appendix A.2 for more discussion). Generally, the absolute numerical value of the PRO suffers from fewer drawbacks than other options and is the most direct representation of the data, especially for a descriptive study aim. A corresponding population summary might be the mean PRO at (a) prespecified time point(s).

The while alive strategy and the use of LMMs and GEE
When dealing with death in the analysis of PROs, a while alive strategy most closely represents the actual or observed situation. Any (implicit) imputation of PROs after death implies that these values are missing and observable in principle. However, PROs after death are neither observable nor defined. The assumption of a hypothetical world in which patients do not die during the study was far removed from reality in our case study. This hypothetical scenario may therefore not be clinically relevant, particularly in groups of patients where the mortality rate is high. Especially in a SAT context, where the aim is often descriptive, it makes sense to stay close to the experienced reality and use a while alive strategy combined with an estimate of the survival probability. In a randomized trial context, a drawback of the while alive strategy may be that differences in case-mix arise between the trial arms over time because of differential survival. However, this would also be the case in future implementations of the treatment in similar populations. Crucially, marginal effects from a standard linear mixed model of PROs over time rely on implicitly imputed PRO values after death. It is important for researchers to be aware that the use of such a model implies a hypothetical estimand regarding death. For other intercurrent events, fitting an LMM that does not account for intercurrent events may correspond to different estimands depending on data availability.
For example, if there are no data after TD, fitting an LMM corresponds to a hypothetical strategy for TD and death, whereas the same model may estimate a treatment policy estimand for TD if all data after TD are available until death. Population means while alive can be obtained by averaging individual predictions from a fitted LMM over those still alive. If direct estimates of population-level means are of interest in a while alive strategy, we recommend the use of GEE with an independence correlation structure. The appropriate analysis will depend on the trial objective and stakeholders.

The treatment policy strategy for non-terminal intercurrent events
For non-terminal intercurrent events such as treatment discontinuation, a treatment policy strategy seems a reasonable choice, as it aligns with the intention-to-treat principle. Often, it is relevant to know what to expect of a treatment, even after it is discontinued. Of course, this depends on the trial aim; for example, a while on treatment strategy may be appropriate for a tolerability objective. A treatment policy strategy for TD and PD requires data collection after these events, which is often limited, as in our case study. After discontinuation of the trial treatment, patients may transfer to a different treatment center and/or enter another trial. This complicates the observation of PROs after TD, especially in single-arm trials and rare diseases. In this paper, we showed analyses where a treatment policy strategy was always applied to TD or PD (or both). Of course, other combinations are possible, and the combination of strategies should be defined carefully.

The composite strategy
A composite outcome of a PRO (or any other outcome) and an intercurrent event may be dominated by the intercurrent event. Examples of composite outcomes include the EQ-5D measure and Quality-Adjusted Life Years, or variants such as Quality-adjusted Time Without Symptoms or Toxicity. For transparency, results from a composite strategy should be accompanied by a measure of the incidence of the event. Furthermore, the interpretation of a composite outcome is difficult when the intercurrent event cannot meaningfully be put on the same scale as the PRO. For instance, assigning a single QoL value to the time after someone's treatment has been discontinued makes little sense. In the composite strategies of this case study, the assumption of a QoL of 0 after death (as is also done in the EQ-5D measure) is highly debatable and makes no sense from a clinical point of view, as patients do not experience QoL after death. While composite endpoints may be common in some contexts such as Health Technology Assessment, we suggest caution in assigning a single PRO value to death or other intercurrent events.

Choosing the most appropriate estimand in a study
The advantages and limitations of each estimand discussed above can be considered when applying the estimand framework to PROs in a clinical study. The research setting determines the most relevant estimand, depending on, for example, the stakeholders involved, the type of PRO, and the PRO objective (e.g., to assess the efficacy or the tolerability of a new treatment). If PRO data are intended for review in a payer or regulatory submission, discussions with these stakeholders to identify appropriate estimands and analyses are highly encouraged. Some studies explore more than one estimand to address multiple stakeholders.
In that case, we recommend clearly specifying the targeted estimand with each presented result, for instance in tables and figure legends. Estimates of different estimands have different interpretations: they are not estimates of the same quantity. It is therefore important to clarify the intended interpretation of each result when reporting on a study.

Intercurrent events and the imputation of missing PROs
Finally, we note that the choice of intercurrent event strategy and dealing with missing PRO data are two separate but related topics. Intercurrent events may hinder the measurement of PROs, causing missingness, and intercurrent events may be predictive for the missing values; e.g., QoL may decrease at the time of disease progression. Conditioning on information about intercurrent events in imputation may make a MAR assumption more plausible. In our analyses, we assumed non-informative censoring, as most censoring was administrative at the end of the study. If an informative censoring mechanism is more plausible, weighted GEE approaches or joint models may be used to account for such censoring. The single imputation method that we applied to address missing data was meant to generate a single example dataset for illustration of the estimand framework. Multiple imputation would better account for uncertainty in the missing values. An alternative way to account for missing PRO data would be to reweight observations by the inverse probability of missingness in the analysis. In addition, our MAR assumption may not hold. Finally, there were virtually no data after treatment discontinuation, so the relation between TD and PROs after TD could not be estimated from the data. Patients' health status might deteriorate after treatment stops, but they may also switch to a new treatment that improves their QoL. We plan to explore imputation methods for longitudinal PROs in the presence of intercurrent events in detail in a future study.
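As an aside to the reweighting option mentioned above, a minimal sketch of inverse-probability-of-missingness weighting is shown below, continuing from the simulated data of the earlier sketches. The missingness mechanism, variable names and model are illustrative assumptions only; a real analysis would require a carefully specified missingness model and, for informative censoring, the weighted GEE or joint-model approaches cited above.

```r
library(geepack)

set.seed(2)
# Placeholder missingness mechanism: lower QoL -> higher chance of a missing PRO
miss <- runif(nrow(dat)) < plogis(-2 + (50 - dat$qol) / 25)
dat$observed <- !miss

# Previous cycle's QoL (baseline carried forward at cycle 1) as a predictor of missingness
dat <- dat[order(dat$id, dat$cycle), ]
dat$qol_prev <- ave(dat$qol, dat$id, FUN = function(x) c(x[1], head(x, -1)))

# Inverse-probability-of-missingness weights from a logistic model for observation
fit_obs <- glm(observed ~ cycle + qol_prev, family = binomial, data = dat)
dat$w <- 1 / predict(fit_obs, type = "response")

# Weighted GEE for the mean QoL per cycle among observed measurements while alive,
# upweighting observations that resemble those most likely to be missing
gee_ipw <- geeglm(qol ~ factor(cycle), id = id,
                  data = subset(dat, observed & alive),
                  weights = w, corstr = "independence")
```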
This case study has illustrated possible estimand definitions when dealing with PROs in a single-arm cancer trial and discussed considerations underpinning this choice. We have also provided an overview of corresponding statistical methods for the estimation of each estimand. Our findings show that trial analysis results and their interpretation strongly depend on the chosen estimand. The estimand framework provides a structure to match the research question in a trial with a well-defined target of estimation, supporting specific clinical decisions. Adherence to this framework can help improve the quality of data collection, analysis and reporting of PROs in SATs and thereby increase end-users' insight and confidence in their results, impacting decision making in clinical practice.
Mutational Landscapes of Smoking-Related Cancers in Caucasians and African Americans: Precision Oncology Perspectives at Wake Forest Baptist Comprehensive Cancer Center
b137117e-b705-49d0-bcf9-c161013cc564
5562225
Pathology[mh]
Advances in genomics and informatics have validated the importance of individuality in cancer diagnosis and treatment. Evidence illustrates that cancer is a disease of genetic and epigenetic causality, profoundly affected by environment and lifestyle. An increasing number of genetic alterations have been characterized that drive the pathogenesis of cancer and convey therapeutic actionability. These driver mutations often are not restricted to a specific cancer type, histology or patient demographic. This unprecedented molecular understanding of individual cancers has ushered in a new era of health care coined precision medicine. Precision medicine has begun a reprogramming of clinical oncology practice. Specialization in organ-oriented disease is being supplemented with molecular target assessment and targeted treatment across cancer types. New clinical trial models (e.g., BASKET trial, NCI-MATCH) emphasize treatment decisions based on druggability of gene mutations rather than tumor histology. Precision medicine consortia have formed to test this new mode of cancer management. The Precision Medicine Exchange Consortium (PMEC) is one such consortium, consisting of eight major cancer centers in the US, including the Wake Forest Baptist Comprehensive Cancer Center (WFBCCC). To investigate the relationship between precision medicine-derived cancer genomic correlates and patient demographics at WFBCCC, 431 cancer patients were enrolled into the Wake Forest Precision Oncology Initiative trial. This patient cohort reflects the patient population in the WFBCCC catchment area, with a high representation of tobacco-related cancers (e.g., lung, colorectal, and bladder) and African American (AA) ancestry (13.5%). In the WFBCCC catchment area, 22% of adults are current tobacco users versus 19% nationally. Cigarette smoke is a known carcinogen, causing defined mutational signatures. However, smoking-related genetic changes in cancer are not well characterized. Even more unclear is whether the mutational events differ between AA and Caucasian cancer patients, despite AA cancer patients having a poorer prognosis, including higher cancer-related and overall mortality rates. Here, we report the characterization of the mutational landscapes of our unique cohort of cancer patients, with findings validated in The Cancer Genome Atlas (TCGA) dataset. We also provide examples of mutation-directed treatment in these patients to demonstrate the clinical impact of precision oncology initiatives.

Patient Cohort
Four hundred thirty-one cancer patients from the catchment area of the WFBCCC participated in the IRB-approved Wake Forest Precision Oncology Initiative (POI) from March 1, 2015 to May 30, 2016. African American status is based on self-reported ancestry. Each patient provided consent for research analysis of sequencing results. Tumor specimens were evaluated by two board-certified pathologists to confirm diagnosis and classification. Tumor biopsies and surgical specimens were formalin-fixed and paraffin-embedded immediately following acquisition, according to standard clinical protocol. Tumor blocks of sufficient cellularity (>20%) and limited necrosis were selected and submitted to Foundation Medicine for FoundationOne® testing. The clinical management process is shown in Figure and supplementary methods.
ClinicalTrials.gov Identifier: NCT02566421

Genomic Profiling
Tumor tissue was subjected to Next Generation Sequencing (NGS) to identify mutations, rearrangements and copy number alterations spanning 415 cancer-related genes that make up the FoundationOne® (F1) test (Foundation Medicine, Cambridge, MA) (Supplementary Methods).

Statistical and Bioinformatic Analysis
Nonsynonymous somatic mutation calls were quantified. Patients were assigned to low or high mutation load groups based on the cohort mean mutation number. Fisher's exact test and Benjamini-Hochberg multiple testing adjustments were used to determine associations between mutation load and DNA damage genes and chromatin remodeling genes. Smoking status was defined by self-reported smoking history obtained from the Cancer Registry and/or Epic Electronic Medical Record. Never smokers were defined as respondents who smoked less than 100 cigarettes in their lifetime. Based on evidence that smoking cessation reduces cancer risk by half at five years, active smokers at the time of clinical data collection and those who had quit smoking within the previous five years were considered current/recent smokers. Those having quit more than five years prior to data collection were defined as former smokers. Only white (Caucasian) and black (AA) patients were included in disparities analyses, as these are the two main ethnic groups of the WFBCCC catchment area. Other racial/ethnic populations were underrepresented in the sample (less than 5%). Analyses for discovery of smoking-related mutations focused on genes with functional roles in DNA Damage Repair and Chromatin Remodeling. Each set of analyses used the Cochran-Mantel-Haenszel test to uncover associations between smoking status (defined as an ordinal variable: Never, Former, Recent) and gene mutation. Fisher's exact test was used to assess significance (p < 0.05) of gene mutation frequencies that differed with respect to low and high mutation load and racial status (Caucasians versus AA). The Hochberg (1988) approach was used to adjust for multiple testing. The MutSig algorithm, MutSigCV, was used to evaluate the significance of mutated genes. All analyses were performed with R statistical computing software version 3.3.0. Mutagenic processes and tumor clonality were analyzed with the R packages somaticSignatures and SciClone, respectively (Supplementary Methods).
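To illustrate the per-gene association testing described above, the sketch below builds a toy patient-by-gene mutation matrix and smoking variable, and runs trend and Fisher tests with Hochberg adjustment in base R. All data are simulated placeholders, and prop.trend.test (a Cochran-Armitage-style trend test) is used here only as a simple stand-in for the Cochran-Mantel-Haenszel statistic the authors report; this sketches the general workflow, not the published analysis.

```r
set.seed(3)
n_pat <- 431
genes <- c("TP53", "KMT2D", "BRCA2", "CDK12", "KDM6A", "SMARCA4")

# Toy data: ordinal smoking status and binary mutation calls per gene
smoking <- factor(sample(c("Never", "Former", "Recent"), n_pat, replace = TRUE),
                  levels = c("Never", "Former", "Recent"), ordered = TRUE)
mut <- sapply(genes, function(g) rbinom(n_pat, 1, 0.05 + 0.05 * as.integer(smoking)))

# Trend in mutation frequency across never -> former -> recent smokers
p_trend <- sapply(genes, function(g) {
  tab <- table(smoking, mut[, g])                    # 3 x 2 table
  prop.trend.test(tab[, "1"], rowSums(tab))$p.value
})

# Mutation frequency differences between two groups (e.g., Caucasian vs AA)
race <- factor(sample(c("Caucasian", "AA"), n_pat, replace = TRUE, prob = c(.85, .15)))
p_race <- sapply(genes, function(g) fisher.test(table(race, mut[, g]))$p.value)

# Hochberg adjustment for multiple testing
data.frame(gene = genes,
           p_trend_adj = p.adjust(p_trend, method = "hochberg"),
           p_race_adj  = p.adjust(p_race,  method = "hochberg"))
```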
Mutational Analysis
We analyzed 431 cancer patients from the catchment area of the WFBCCC who participated in the IRB-approved Wake Forest Precision Oncology Initiative (POI). Patient demographics are summarized in Table , Table and Figure . In our patients, the most frequently mutated genes were the tumor suppressor genes TP53, APC, FAT1, RB1, BRCA2, and NF1; the Wnt signaling pathway genes LRP1B and APC; the oncogenes KRAS and PIK3CA; DNA damage repair (DDR) genes (ATM, BRCA2); chromosomal integrity genes (TERT); and chromatin remodeling (CR) genes (KMT2D or MLL2, KMT2C or MLL3, ARID1A, ARID1B, EP300) (Figure A). Some of the observed gene mutations were expected. For example, TP53 showed a uniformly high frequency of mutation across all cancer types, while APC was predominantly mutated in colorectal cancer. KRAS was mutated at high frequency in pancreatic, colorectal, and lung cancer. LRP1B was frequently mutated only in lung cancer (43 of 90, 47.8%). Another gene highly mutated in lung cancer was SPTA1 (33 of 90, 37%), which has unknown oncogenic functions. Analysis of The Cancer Genome Atlas (TCGA) lung cancer cohort (adenocarcinomas and squamous) validated the frequent mutation of the SPTA1 gene (Figure ). EPHA3 and EPHA5 were also frequently mutated in both our and TCGA cohorts (Figure ).
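A gene-level mutation frequency summary of the kind underlying the lists above can be computed directly from a binary patient-by-gene alteration matrix. The short sketch below shows one way to do this, reusing the simulated mut matrix and smoking variable from the previous example; these are placeholders, not the study data.

```r
# Overall mutation frequency per gene, sorted from most to least frequent
freq <- sort(colMeans(mut), decreasing = TRUE)
round(100 * freq, 1)

# Frequency by smoking status, e.g., to compare never vs current/recent smokers
freq_by_smoking <- aggregate(mut, by = list(smoking = smoking), FUN = mean)
freq_by_smoking
```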
TERT, which codes for telomerase and is involved in the longevity of tumor cells, was found to be frequently mutated at a promoter hot spot (-124C > T) in brain tumors (16 of 31, 53%), bladder cancers (9 of 16, 56%), and head/neck cancers (6 of 26, 23%), consistent with recent reports. In contrast, the TERT promoter is rarely mutated in colorectal, lung or pancreatic cancer, or soft tissue sarcoma (Figure B). A striking observation was the remarkably high mutation rates of DDR and CR genes in our cohort and their association with high mutational load (Figure A, B, C), underscoring the highly unstable genome associated with the smoking-related cancers that dominate our cohort. Large numbers of gene mutations (hypermutation phenotype) and copy number alterations (chromosomal instability or CIN) represent two different types of genomic instability. We observed that the CIN phenotype exhibited variable patterns in different cancer types, with extensive overall changes in lung and colorectal cancers (Figure A). Despite high mutation rates, two smoking-related cancer types, bladder and head/neck cancers, did not show extensive copy number alterations (Figure C, Figure A). Among the most extensively amplified genes were oncogenes, including ERBB2, MYC, MET, CDK6, and EGFR (Figure B). Two cases exhibited amplification of the immunosuppressive genes PD-L1 (CD274) and PD-L2 (PDCD1LG2), suggesting a role for anti-PD-1 or anti-PD-L1 therapy. Genes frequently deleted in our advanced cancer cohort are CDKN2A/B and PTEN (Figure B).

Gene Mutations Associated with Smoking
In our cohort, proportions of smokers were similar in AAs and Caucasian Americans (38 of 58 and 216 of 356, respectively; Fisher exact test p-value, 0.56). Current/recent smokers exhibited a significantly higher mutational load (mean = 20.5, median = 14.0) than former smokers (mean = 13.0, median = 11.5; p = 0.017, 2-sided t-test) and never smokers (mean = 12.3, median = 11.0; p = 0.029, 2-sided t-test). Analysis of total mutations per cancer showed a heterogeneous pattern (Figure ), with lung, bladder, and colorectal cancer exhibiting high tumor mutational load. Appendiceal, brain, and prostate cancers exhibited the lowest mutational load. Analysis of the mutational signatures, characterized by nucleotide changes in the context of neighboring nucleotides, identified three major signatures (Figure A). Current smokers, former smokers and never smokers exhibited distinct mutational signatures (Figure B). Many DDR and CR genes exhibited associations with smoking status (Figure C), with a greater frequency of mutation in current/recent or former smokers as compared to never smokers. After adjusting for multiple testing, mutations in two DDR genes, CDK12 and BRCA2, met the criteria for statistical significance (p = 0.0069 and 0.016, respectively). Similarly, the CR gene KMT2D met the criteria for statistical significance (p = 0.0087), while two others (KDM6A and SMARCA4) were nominally significant (p = 0.026 and 0.032, respectively) (Figure C, Table ). To begin validation in the TCGA cohort, we found that among the solid tumors, mutation and smoking status data were available for 2,821 cases. As shown in Figure D, current and former smokers have similar mutation frequencies for these genes. This analysis showed that most smoking-related gene mutations found in our cohort (e.g., KMT2D, BRCA2) were validated in the TCGA cohort. Intratumoral clonal heterogeneity poses serious challenges to precision oncology treatment.
Tumors comprised of multiple clones with different mutational events may require multiple targeting strategies, in combination or in sequence. We quantified tumor clonal heterogeneity based on clustering of variant allele fractions (Figure A, B, see Methods). Mutation rates for 48% of patients were relatively low, with no clonal diversity. For the others, 19%, 23%, and 10% of cases exhibited 1, 2 or more than 2 clones, respectively, based on clonality analysis (Figure C). Higher clonality was associated with smoking (Figure E).

Gene Mutations Associated with Race
The overall mutational landscape of AA patients is similar to that of the whole WFBCCC cohort, the majority of which are Caucasian patients (Figure A and Figure A). However, our analysis revealed differential mutation rates in the key genes TP53 and KMT2C (Figure B). In the TCGA cohort, there are 842 AA and 7,149 Caucasian cases with mutation data and 892 AA and 7,679 Caucasian cases with gene amplification data. TP53 (p = 0.027), and to a lesser extent KMT2C (p = 0.093), were more frequently mutated in AA patients in the TCGA cohort (Figure C). Gene copy number analysis revealed marked differences in five oncogenes in our cohort (Figure D); all of them were also significantly more frequently amplified in AA patients in the TCGA cohort (Figure E).

Precision Oncology Case Reports
The essence of precision oncology is to match mutational information with drugs that have shown therapeutic efficacy in targeting the mutated protein. Oncologists at WFBCCC have designed clinical treatment regimens based on genomic testing in our Precision Oncology Trial, and patients have shown remarkable responses. Key examples are described in the supplemental document.
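To give a sense of how clonal heterogeneity can be quantified from variant allele fractions, the sketch below clusters simulated VAFs for a single tumor with a Gaussian mixture and counts the resulting clusters. Model-based clustering via the mclust package is used here purely as a generic stand-in for the SciClone analysis named in the Methods; the VAF values and the omission of purity and copy-number handling are simplifying assumptions for illustration only.

```r
library(mclust)  # model-based (Gaussian mixture) clustering

set.seed(4)
# Simulated variant allele fractions for one tumor: founder clone plus two subclones
vaf <- c(rnorm(30, mean = 0.45, sd = 0.04),   # clonal (founder) mutations
         rnorm(15, mean = 0.22, sd = 0.03),   # subclone 1
         rnorm(10, mean = 0.10, sd = 0.02))   # subclone 2
vaf <- pmin(pmax(vaf, 0.01), 0.99)

# Fit Gaussian mixtures with 1-4 components and pick the best by BIC
fit <- Mclust(vaf, G = 1:4)
fit$G                      # estimated number of VAF clusters ("clones")
table(fit$classification)  # mutations assigned to each cluster
```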
Smokers and AAs are more prevalent among our disproportionately rural, Appalachian/Piedmont catchment area population. Thus, we are able to uniquely interrogate mutations associated with these two understudied populations. This undertaking has provided a number of insights. Among the most interesting discoveries are the revelations that DDR and CR genes are highly mutated in current/former smokers and that smoking is associated with augmented clonal evolution (clonality) and tumor heterogeneity. This is consistent with recent genomic characterization of smoking-related cancers . These results provide strong evidence that genomic instability is a fundamental hallmark of cancer and that the events underlying the regulation of genome stability are centered on interactions with environmental factors and lifestyle. AA cancer patients have a more dismal prognosis, which represents a key health disparity challenge in the US. Our genomics analysis revealed a number of genes mutated at different frequencies in AA and Caucasian cancer patients. After further analysis of the larger independent TCGA cohort, mutations of the tumor suppressor gene TP53 still emerged as a more common event in AA cancer patients. Notably, in our lung cancer cohort, mutation rates for a number of genes, including TP53 , are higher than those observed in the TCGA cohort, consistent with the predominance of advanced and smoking-related cancers in our cohort. However, our analysis showed that the enrichment of TP53 mutations in AA patients is not driven by lung cancer in the cohort, because, among the lung cancer cases, TP53 mutation rates are similar in AA and Caucasian Americans (p = 0.5). TP53 has long been recognized as a critical control gene for genome stability . Numerous studies have shown that mutations of TP53 are associated with poor prognosis in cancer . Therefore, genomic stability regulated by TP53 may be a key factor that contributes to cancer outcome disparities among different racial groups. A limitation of this study is the size of the cohort enrolled in precision oncology initiatives, owing to the enrollment criteria and the cost associated with the clinical sequencing tests. 
Therefore, future data sharing efforts will enable pooled analyses of all the major precision oncology programs in the country to determine whether genetic events such as increased TP53 mutation rates are observed in all major cancer types and to clarify their relationship with smoking. Interestingly, during the review of our study, a recent paper focusing on lung cancer reported overall similar mutation frequencies between AA and Caucasian American patients; however, the authors also observed more prevalent TP53 mutations in the AA subgroup than in the Caucasian group . In addition to gaining insight into the genetic/molecular mechanisms of cancer development and progression, a key benchmark for precision oncology initiatives is the translatability of genomic information into more accurately targeted and beneficial treatments for patients , - . Several successful examples at WFBCCC are reported here and described in the supplemental document. There is no doubt that increasing numbers of cancer patients will benefit from the precision oncology design. There are, however, a number of important challenges and limitations . First, the current precision oncology initiatives focus more on advanced metastatic cancer patients. Many of these patients die within 3-4 months of the genomic testing, before treatment decisions can be rendered. Thus, genomic testing should extend to patients with newly diagnosed metastatic disease, with the hypothesis that a patient with longer expected survival will benefit more from precision treatment. Retesting of tumors from recurrent patients will identify treatment-associated mutations to revise therapeutic strategies. Second, drug availability is a major problem - . Many FDA-approved but off-label drugs are not covered by insurance. Getting access to these off-label drugs on a compassionate basis invariably requires the resources and extra time of physicians. There clearly is a need for a streamlined process of drug acquisition for precision oncology to reach its full potential. Third, genomic testing reveals many gene mutations without information about whether these mutations are deleterious (driver mutations). Thus, there is a clear need for efficient high-throughput laboratory assays to identify functional mutations . Fourth, intratumoral heterogeneity poses a significant obstacle to sustained treatment response to single-agent therapy , . Our clonality analyses showed that different clones exist in a fraction of tumors with different potential driver mutations. Therefore, precision oncology requires an understanding of tumor clonality to inform the design of combination or sequential therapy with different drugs. Finally, from the patients' perspective, these complexities are compounded by the psychosocial and ethical considerations inherent to the genomic profiling process , . In this newly evolving paradigm, patients and providers need to navigate care from a patient-centered framework. In the decision-making process, smoking status and ethnicity should clearly be considered because of the association with differential mutation rates.
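The group comparisons of mutation frequency discussed above (for example, TP53 by ancestry) reduce to contingency-table tests; the sketch below runs such a test on an invented 2x2 table. The counts are hypothetical and do not correspond to the WFBCCC or TCGA data.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = ancestry group, columns = TP53 mutated / wild-type.
aa_counts        = [60, 140]
caucasian_counts = [300, 1100]

table = [aa_counts, caucasian_counts]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```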
Evaluation of ophthalmic surgical simulators for continuous curvilinear capsulorhexis training
bd1fbc68-7115-4bfb-a9ef-1d4a16305870
9018214
Ophthalmology[mh]
The study was approved by the Institutional Review Board of the Albert Einstein College of Medicine and was conducted in association with the Office of Clinical Trials and the Henkind Eye Institute at the Montefiore Medical Center in the Bronx, New York. Funding for the project was provided by a restricted educational grant from the Manhattan Eye and Ear Foundation. Three commonly used capsulorhexis simulators were chosen and sourced based on experience and availability, namely the Kitaro DryLab model (Frontier Vision Co., Ltd.), SimulEYE SimuloRhexis model (InsEYEt, LLC), and the Bioniko Rhexis (Bioniko Consulting LLC). The Kitaro DryLab model (Figure , a) has a central pupil diameter of 14.0 mm with an open-sky configuration and prefabricated openings that simulate clear corneal incisions. The simulated capsule is composed of a 5-micron-thick, polyester bilayer that comes on a roll allowing multiple attempts. The capsule is placed slightly taut on a reusable artificial resin clay nucleus that simulates a cataract. In this study, the simulated eye was mounted within a rubber face to simulate human facial contours. As recommended by the manufacturer, ophthalmic viscosurgical device was placed on the surface of the capsular film to simulate an anterior chamber. The SimuloRhexis model (Figure , b), with a physiological central pupil diameter of 8.0 mm, features an anterior chamber that can be filled with ophthalmic viscosurgical device and an artificial cornea that requires a standard keratome incision prior to the CCC, as is performed in actual cataract extraction. This model suctions directly onto a flat surface and allows the user to simulate variable posterior pressure by mechanically adjusting the base of the simulator. The Bioniko Rhexis model (Figure , c), with a central pupil diameter of 9.0 mm, was stabilized with the recommended Mini Holder prior to use in this study. Similar to Kitaro DryLab, this model has an open-sky configuration but, by contrast, features a limbal corneal ridge that can be incised with a standard keratome blade. To maintain proper consistency of the material, the entire surface was moistened with water prior to use as per recommendations. Expert cataract surgeons (N = 7), defined as having performed greater than 1000 primary cases, were identified, and informed consent was obtained. Each surgeon was tasked to create a 5.5-mm CCC on all three simulators, which were presented in a randomized sequence for a total of three trials on each model. The study was performed under standard operating room conditions at the Hutchinson Metro Center Operating Suite in Bronx, New York. With a sample size of 7 surgeons performing a total of 63 total trials, the study had 80% power with a 2-sided type I error rate of 5% to detect a minimum effect size of 1.3 in the measured outcomes among simulators. The surgeons were instructed to position themselves as they would for an actual procedure, and foot-pedal controlled Zeiss Lumera microscopes with recording capabilities were used for each trial. The unmarked and previously prepared simulators were each placed directly in front of the surgeons on a raised metal tray table in randomized fashion. 
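The power statement in the design above (80% power at a two-sided 5% type I error to detect an effect size of 1.3 with 7 surgeons) can be roughly checked with an off-the-shelf routine if the comparison is simplified to a paired contrast across surgeons; the sketch below makes that simplifying assumption and is not the study's actual calculation.

```python
from statsmodels.stats.power import TTestPower

# Approximate the design as a paired comparison across the 7 surgeons.
analysis = TTestPower()
power = analysis.solve_power(effect_size=1.3, nobs=7, alpha=0.05, alternative="two-sided")
print(f"Approximate power with n=7 and d=1.3: {power:.2f}")  # roughly 0.8
```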
The standardized materials used included the following: a dual-bevel, 2.75-mm microkeratome blade to make the clear corneal incisions for SimulEYE and Bioniko, dispersive ophthalmic viscosurgical device (VISCOAT) for Kitaro and SimulEYE, a standard bent cystotome needle on a 1-mL syringe to make the initial anterior capsular rent, and a pair of standard titanium Utrata forceps to create the CCC. The primary measured outcomes included the size of the completed CCC (millimeters), the number of capsular forceps manipulations (number of grabs) required, and the task duration (seconds). Immediately after each CCC attempt, surgeons were asked to subjectively rate on a modified Likert scale (1 to 7) how closely the model simulated human tissue using the following question: “On a scale from 1 to 7, with 1 signifying the least realistic simulation experience and 7 signifying the most realistic simulation experience, how well does this kit simulate performing a CCC on real tissue?” The names of the simulators were not revealed to the surgeons until after all trials were completed. Outcome measures were summarized for each kit and trial by computing means and standard deviations. In addition, multiple linear regression models that included kit, trial, and surgeon as predictor variables were fit to the data to assess the independent effects of each factor on each of the outcomes. A 2-sided P value less than 0.05 was considered statistically significant. All analyses were performed using SAS v. 9.4 (SAS Institute Inc.). A total of 63 trials (7 surgeons completing three trials on each of the 3 simulators) were performed in a randomized fashion. The results for each primary outcome are presented. There were statistically significant differences among the simulators and across the 3 trials for all outcome measures. Regarding size (maximum diameter in millimeters), surgeons created the 5.5-mm CCC most accurately on the Bioniko and SimulEye models. Surgeons performed the largest average CCC on the Kitaro model (8.00 ± 0.84) compared with both Bioniko (5.24 ± 0.60, P < .0001) and SimulEYE (5.11 ± 0.41, P < .0001). Across all simulators, CCC size was overall larger in the third trials (6.29 ± 1.56) compared with the first trials (5.94 ± 1.39, P = .003, Figure ). Surgeons spent more time (seconds) performing the CCC on Bioniko (41.95 ± 26.70) than on both Kitaro (32.05 ± 14.99, P = .02) and SimulEYE (28.90 ± 15.18, P = .002) and more time on average on trial 1 (42.24 ± 25.23) than that on trials 2 (28.48 ± 15.87, P = .001) and 3 (32.19 ± 16.44, P = .01, Figure ). Bioniko required a greater number of grabs (6.53 ± 3.14) than both Kitaro (4.90 ± 2.47, P = .01) and SimulEYE (3.90 ± 1.34, P < .0001). Trial 1 (6.19 ± 3.57) had a greater number of grabs than both trials 2 (4.33 ± 2.01, P = .002) and 3 (4.81 ± 1.63, P = .02, Figure ). The Kitaro (4.56 ± 0.84, P < .0001) and SimulEYE models (4.19 ± 0.92, P < .0001) were rated as more realistic by the surgeons than the Bioniko model (1.38 ± 0.80) on a 7-point modified Likert scale (Figure ). The highest numbers on the modified Likert scale represent the most realistic simulation experience. Ophthalmic surgical simulators are in popular use by residency training programs and offer novice surgeons the opportunity to practice complex maneuvers in preparation for actual surgery in a safe and controlled environment. Studies demonstrated improved performance by students and residents after practicing either on simulator devices or in the wet lab. 
– Specifically, Belyea et al. showed that surgeons who trained on EYESi had shorter phacoemulsification times, lower phacoemulsification power, fewer intraoperative complications, and a shorter learning curve on average than those who were not trained on EYESi. Pokroy et al. also found that ophthalmic surgical simulators shortened the learning curve for the first 50 cataract cases, with less adept residents benefiting the most from the training. It is imperative to note that both of these studies involved virtual reality surgical simulation through the EYESi module; neither used any of the 3 models that were used in this study. The Kitaro model has been studied for steps including the CCC; however, this was performed using the Da Vinci Robotic Surgical System on the Kitaro WetLab model. In our analysis, we chose the Kitaro DryLab model with manual CCC creation as this is the more commonly used training tool for this task. To the authors' knowledge, no studies have been reported on the Bioniko Rhexis or SimulEYE SimuloRhexis models. The advertised cost of materials to perform 100 CCCs, not accounting for institutional discounts, was $970 for Bioniko, $995 for Kitaro, and $715 for SimulEYE. Of note, the Kitaro kit uses a roll of replaceable capsular film that allows for multiple additional practice opportunities. From the perspective of the expert surgeons who participated in this study, the experience of creating the CCC on the SimulEYE SimuloRhexis and Kitaro DryLab simulator kits were believed to most closely approximate the experience of creating the CCC in a real-life cataract surgery. Surgeons also tended to perform the CCC faster on average with both of these simulators compared with the Bioniko model. This result is reasonable given the Bioniko model is designed to tear in a manner that allows more capsular grabs attempts. Regarding size, surgeons created a 5.5-mm diameter CCC most precisely on the Bioniko and SimulEYE models compared with the Kitaro model. We surmise that this is due to the naturally larger pupil diameter on the Kitaro model, which may have led to a tendency for surgeons to create a larger CCC. In general, surgeons performed faster CCCs over the three trials on the Kitaro and Bioniko models, suggesting a learning curve on these simulators with practice. Of interest, there was no significant learning curve with the SimulEYE model across the three trials, and surgeons' overall performance was the most consistent among the three trials on this model. Beyond the formal survey, extemporaneous comments from the surgeons regarding each of the models were also recorded in real-time during each CCC trial (Table ). Regarding task difficulty, it was noted that the Kitaro DryLab model was oversimplified relative to the SimulEye and Bioniko simulators, which incorporate the creation of a triplanar clear corneal incision. Furthermore, a distinct advantage of the SimulEYE SimuloRhexis model noted by the surgeons was the ability of the capsular tissue to remain everted between grabs. Some surgeons did find the SimulEye capsule to be overly brittle and tear more easily than a true capsule, however. Regarding the clear corneal incision, it was noted by some that the Bioniko Rhexis felt the most realistic as the consistency and memory of the wound felt similar to that of a true cornea. However, surgeons overwhelmingly found that the capsular tissue of the Bioniko model was overly friable and did not tear naturally. 
Of note, the Bioniko Rhexis model is purposefully designed to promote frequent capsular regrasping and allow for the assessment of the amount of corneal wound manipulation. This pilot study was designed to formally analyze both subjective and objective differences among the three simulators. The underlying assumption was that highly experienced surgeons can provide the most nuanced feedback comparing the simulators to human tissue. These results, however, do not necessarily validate the efficacy of these simulators in training novice surgeons. Larger case–control studies designed to formally evaluate learning curves, surgical complication rates, and possibly ergonomics are necessary to make broader conclusions and recommendations. To the authors’ knowledge, this is the first study to systematically evaluate CCC training simulators from the perspective of expert cataract surgeons. Although the SimulEYE SimuloRhexis was found in our study to have an advantage when looking at the overall performance and fidelity across the studied metrics, each of the 3 capsulorhexis simulators tested has its own unique advantages and disadvantages. Each residency training program should decide which simulator best suits its training needs based on an individual assessment and the resources available. Further validation studies are needed to determine the effect the simulation training has on actual surgical outcomes for novice surgeons. WHAT WAS KNOWN Ophthalmic surgical simulators allow surgeons of all skill levels to practice specific steps of ophthalmic surgery in preparation for the operating room. The continuous curvilinear capsulorhexis (CCC) is a fundamental step of cataract surgery and one of the most challenging maneuvers for surgeons to master. WHAT THIS PAPER ADDS To the authors' knowledge, this is the first study to formally compare the experience of creating the CCC on a variety of ophthalmic surgical simulators from the perspective of expert cataract surgeons. This study presented objective and subjective feedback of CCC creation on surgical simulators, allowing residency programs to determine which simulators best suit their training needs.
Machine learning models including patient-reported outcome data in oncology: a systematic literature review and analysis of their reporting quality
600ebb84-8560-47d4-bc80-a55928cea23a
11538124
Internal Medicine[mh]
The amount of data produced by clinical trials, research projects, and patient care in oncology has grown exponentially in recent years . A valuable part of this data contains patient-reported outcomes (PROs), such as quality of life, that directly provide the patients’ views on the impact of disease and treatment on their health status without prior interpretation by clinicians . In oncology, PROs are recognized as critical endpoints for assessing the effectiveness of cancer treatments and interventions, particularly considering the calls for patient involvement and patient-centeredness . Potential clinical benefits of PROs range from enhanced symptom management and patient-clinician communication to improved overall survival . The growing availability of PRO data harbors both benefits and challenges. Large datasets can offer insightful information on the underlying mechanisms of cancer and prospective therapeutic targets . However, complex intelligent system tools that go beyond conventional statistical methods are needed to analyze this data. In this context, artificial intelligence (AI) algorithms are increasingly being used in medical research and applications, as they allow for the analysis of big data with reliability in a variety of medical fields, from radiology to genomics . Incorporating patient-reported outcome measures (PROMs) into AI studies humanizes AI in healthcare by integrating patients’ perspectives. This approach offers a comprehensive view of health, aiding collaborative decision-making beyond solely relying on traditional clinical endpoints like survival . Machine Learning (ML) is a subfield of AI and consists of computer algorithms that are trained to identify patterns in predictors (also called “features” in ML terminologies) and make predictions based on them. ML algorithms use various types of learning strategies, of which the most common are supervised and unsupervised learning. The ML type covered in this review incorporates supervised ML algorithms trained to predict predefined outcomes, by learning the patterns in the data and mapping predictors to desired outputs. After training, supervised ML models are tested on unseen data, and the performances of the prediction are evaluated. Unsupervised ML algorithms, instead, are trained without knowing the desired output. Unsupervised learning is typically used when researchers want to investigate new patterns in complex data. Deep learning (DL) is a subfield of ML and includes deep artificial neural networks, i.e., artificial neural networks composed of several layers of artificial neurons. Compared to classical ML algorithms, DL algorithms usually require significantly more training data and computation power . The increasing interest in ML in medicine, with a particular focus on supervised ML, requires methodological rigor in its application and reporting. The algorithms need to be trained and tested using appropriate datasets and methodology to ensure that the results and models are valid, reliable, and generalizable to the wider patient population . To offer an overview of published oncological studies reporting ML algorithms including PROM scores, the objectives of this systematic review were threefold: To assess whether there has been an increase in research publications about ML with PROM scores as a predictor or an outcome over the last years. To assess which ML algorithms and PROMs are most commonly used in the field of interest. 
To evaluate the quality of the reporting of applied ML models, according to a modified version of the “minimum information about clinical artificial intelligence modeling” (MI-CLAIM) checklist. Registration and reporting guideline The review was registered with Prospero (ID: CRD42023405660) and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline . Information sources We searched PubMed and Web of Science for studies applying ML methods including PROM scores as a predictor and/or as an outcome within the oncology field and published until 13 December 2022 (the date of search). No filters were applied. The full search strategies and documentation are provided in Supplementary Tables and . The database search was complemented with a backward search of review publications identified in the screening process. Eligibility criteria References were excluded if they were reviews, guidance documents, or lacked a patient population diagnosed with cancer (excluding mixed populations with cancer and other diseases). Publications were also excluded if they did not involve supervised machine learning or deep learning models or did not include PROM scores as either a predictor or an outcome. Finally, references were only eligible if they were in English and the full text was available to the authors. Study selection and data extraction The literature review software DistillerSR was used to provide full transparency of the 2-level study selection process and the subsequent data extraction. Two reviewers from a combination of three (DK, MC, NJH) worked independently to decide on the eligibility of each record on the abstract and full-text level and solved emerging conflicts. If no agreement was reached, a third reviewer (MS) was consulted. General study characteristics were extracted by one reviewer (DK). Data to answer the a priori-defined review questions on the quality of reporting of applied ML models were extracted by two reviewers (MC & PZ), who are technical experts in the ML field. Disagreements were discussed by these two reviewers. Data items (list of outcomes/variables extracted) Descriptive information, including cancer type, country, and sample size, was extracted. The PROMs used were recorded, and it was evaluated whether PROM scores were included as predictors or outcomes in the respective ML model. Information on the type of ML model, its aim, the ratio of cross-validation training set to test set, and the best result were gathered. Assessment of the quality of reporting The quality of reporting appraisal followed a prespecified procedure, using an adapted version of the Minimum Information about CLinical Artificial Intelligence Modeling (MI-CLAIM) checklist . This checklist was developed with the purpose of assessing both the clinical impact, including fairness and biases, of an AI model and the replication of its technical design . The applied procedure was adapted from a previous study . Unlike the work by Smets et al. , we removed two questions on comparison with state-of-the-art baseline methods, as it was not possible to identify baseline methods in this field of research. Standard operating procedures were established a priori to guarantee a common understanding of variables. In Table the adapted MI-CLAIM checklist is reported, including comments proposed in the publication by the authors , which guided the interpretation of the items. 
The checklist is composed of 10 items, which assess the quality of the reporting for (i) study design, (ii) data preparation and partitioning, (iii) model development, optimization, and final model selection, (iv) model performance, (v) model examination and (vi) reproducibility and transparency. Synthesis methods Frequencies and respective percentages of categorical variables (e.g., cancer type, country, PROM used) were calculated using SPSS v. 29.0.0.0. The calculation of the “ML Quality of Reporting Score” (MLQRS) followed this scheme: “completed” = 1 point and “not completed” = 0. The maximum achievable score was 10.
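Because the MLQRS is simply the number of completed checklist items, its computation is a one-line sum once the item ratings are tabulated; a minimal sketch with hypothetical studies and ratings is shown below.

```python
import pandas as pd

# Hypothetical per-study ratings for the 10 MI-CLAIM-derived items:
# 1 = "completed", 0 = "not completed".
items = [f"item_{i}" for i in range(1, 11)]
ratings = pd.DataFrame(
    [[1, 1, 1, 0, 1, 1, 0, 1, 0, 0],
     [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
     [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]],
    index=["study_A", "study_B", "study_C"],
    columns=items,
)

# ML Quality of Reporting Score: one point per completed item (maximum 10).
ratings["MLQRS"] = ratings[items].sum(axis=1)
print(ratings["MLQRS"])
print(ratings["MLQRS"].describe())  # distribution of the score across studies
```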
The search resulted in 1634 unique references that were screened for eligibility. The two-stage selection procedure resulted in 52 (3.2%) studies considered eligible for the review. For full traceability, Fig. shows the total study selection process. Study characteristics Twenty-five of the reviewed studies (48.1%) included patients with breast cancer, the most frequent cancer type examined, followed by lung cancer ( n = 13, 25.0%) and gastrointestinal cancers ( n = 11, 21.2%). Country-wise, the majority of studies were conducted in the USA ( n = 21, 40.4%). In total, the evaluated studies included 102 977 patients (IQR = 818, min = 25, max = 46104). Thirty-six (69.2%) studies included PROM scores as a predictor and 32 (61.5%) as an outcome. The most frequently used PRO questionnaires were measures provided by the European Organisation for Research and Treatment of Cancer (EORTC) (e.g., ), with ten (19.2%) studies using them, followed by the Short Form 12 or 36 (SF 12/36) ( n = 6, 11.5%) and the Edmonton Symptom Assessment System (ESAS) ( n = 5, 9.6%). A comprehensive overview of the study characteristics of the included studies is presented in Table . Figure illustrates the number of included publications per year. Type and quality of reporting of applied ML models Artificial neural networks were the ML algorithm mainly used in the included studies ( n = 14, 26.9%), followed by random forest classifiers ( n = 11, 21.2%). In the 36 studies where PROM scores were used as predictors in an ML model, the aim was mainly to predict other PROM scores ( n = 16, 44.4%) and survival/mortality ( n = 11, 30.6%). Only 9 (17.3%) of the included studies had an MLQRS of 8 or higher, while the majority of studies had an MLQRS between 5 and 7 ( n = 28, 53.8%). Concerning single items of the quality of reporting scores (see Supplementary Table for the results on an individual study level), only a minority of studies clearly described data transformations ( n = 13, 25%), detailed the model configuration and hyperparameters ( n = 16, 30.8%), objectively discussed the reliability and robustness of the model ( n = 9, 17.3%), and provided code, pseudo-code, or data ( n = 3, 5.8%). See Table for a full overview of ML-related results, Fig. for a histogram portraying the distribution of the MLQRS across studies, and Supplementary Table for individual study-level results for both PRO and ML variables combined. 
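To make the workflow elements reported above concrete (PROM-derived predictors, an independent test set, a performance metric, and a feature-importance examination), the sketch below runs a purely illustrative random forest on simulated data; it does not reproduce any of the included models, and the predictor names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Simulated dataset: three hypothetical PROM-derived predictors and a binary outcome.
n = 300
X = rng.normal(size=(n, 3))  # e.g., fatigue, pain, and physical-function scores
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

# Keep training and test data strictly independent (one of the checklist's reporting items).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Performance on unseen data and a simple model examination via feature importances.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.2f}")
print("Feature importances:", model.feature_importances_.round(2))
```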
The Artificial Intelligence Index Report 2021 states that the number of peer-reviewed AI publications grew by nearly 12 times between 2000 and 2019 , revealing the hype the field has experienced in the last two decades. This development is mirrored in the niche of PRO research in oncology covered in this review. The number of publications reporting ML models including PROM scores increased 20-fold between 2011 and 2021. The trend continued, albeit at a slower pace, through 2022. The predominance of the measurement system of the EORTC aligns with a recent review showing that more than half of cancer clinical trials in six main cancer populations published between 2014 and 2019 used EORTC PROMs. Looking at the architecture of the included ML studies, most employed artificial neural networks (ANNs), not designed as deep neural networks, as the final model. It is also interesting to observe that, out of 16 studies where ANN and multiple other algorithms were tried, ANN was selected as the best classifier in 9 (56.3%) and was part of the final ensemble model in another four works (25%). Although ANNs are generally more computationally expensive than simpler algorithms, they can detect complex and non-linear relationships between inputs and outputs, making them potentially superior to other methods. Similarly, random forest (RF), the second most employed algorithm in the included studies, is an ensemble of decision trees that typically outperforms single classifiers . In contrast to current ML research in cancer prediction and diagnosis, where more models are based on deep learning , only three of the included studies proposed deep neural network architectures. 
The cause of this can be manifold. First, PROM scores have the typical data structure of features (i.e., structured variables to be given in input to a model), which perhaps influenced researchers opting more for classical ML methods than for DL algorithms, which are typically more suited for input data like images or signals. Second, the sample size of the included studies was generally low and DL algorithms usually require a big enough sample size to be properly trained and validated. Third, because of their complexity, it is more difficult to understand how DL models make predictions and decisions compared to classical ML ones . In the medical field, interpretability is a crucial factor in including ML models in clinical work . Although many research groups have developed methods to try to explain the decision process of DL models, such methods are not univocally accepted by the medical community . On the other side, feature importance and sensitivity analyses provide a better understanding of the decision-making process of ML models . The majority of the studies included in the review used these approaches to explain the decision-making process of the proposed models. With the increasing size and complexity of datasets, it is likely that shortly the field will move to investigate deep learning architectures to evaluate PROMS in oncology. In particular, natural language processing methods to analyze both structured as well as unstructured PROM scores might be a promising way to explore . In the field of natural language processing, there is also a great advancement in the field of interpretability and explainability, which might facilitate not only the investigation of these models but also their future integration . With a mean quality of reporting score of 5.7 across the included studies, this work shows that, overall, the quality of reporting of ML models is relatively low, similar to what was shown in related work in the field of oncology without any PRO focus . The results of another recent review indicate a high risk of bias in prognostic models using ML in oncology . A common issue affecting most of the included publications concerns reproducibility, as details on data transformations, model parameters, and open-source code were available only for the minority of the studies. The lack of reproducibility of ML models including PROM scores in medical research is a known problem and strategies to overcome it should be implemented as soon as possible . Furthermore, only very few studies (7.7%) validated their models on external datasets. External validation means that a trained ML model is applied to a different population or setting . To integrate ML models in clinical practice, such external validation is essential, as only through this step it is possible to ensure that a model is generalizable to a new population and that it does not suffer from either overfitting, unfairness, or bias . This is particularly important for the models included in this review, which were generally trained in small datasets. External validation is particularly critical in oncology, where patient populations, treatment regimens, and healthcare settings vary considerably . Without transparent and high-quality reporting, it is hard to understand whether the models might be affected by unfairness and biases. As an example, the lack of a clear description that the dataset is diverse and representative of different conditions (e.g. 
patients with different ages, genders, and ethnicities) might reflect the development of a biased model. When applied to an underrepresented population with different characteristics, such a model might perform differently than expected, affecting patient outcomes and raising ethical questions . As reproducibility and external validation are prerequisites for the implementation of ML algorithms in clinical practice, and given the relatively low quality of reporting of the ML models included in this work, our study suggests that the integration of the proposed models in clinical practice is still far from being achieved. The availability of tools such as the MI-CLAIM should guide researchers to improve the quality of reporting of ML algorithms. Limitations of the review processes applied The MI-CLAIM checklist has been developed with the aim of evaluating the reporting quality of ML algorithms and should therefore not be seen as a tool to evaluate the quality of the ML modelling itself. As the ML algorithms included in this systematic review address different research questions and use different datasets, a direct comparison of their quality in terms of classification performances was not possible. As an additional limitation, we also noticed that the interpretation of the MI-CLAIM checklist is not always objective and univocal. We found it particularly difficult to objectively evaluate the presence of a clear discussion of the reliability and robustness of the model (point 9 in the checklist). Furthermore, we excluded studies proposing unsupervised learning models, as some items of the MI-CLAIM checklist (Validation methodology, Train/test independence, and Performance metrics) were not applicable, as already reported in . One limitation of this review is that it relied on only two major databases for the literature search. While these databases were selected for their extensive coverage of relevant studies, including additional databases could have provided a broader scope and potentially captured additional relevant literature. Implications of results for practice, policy, and future research With regulatory bodies starting to systematically emphasize the use of ML in medical research and drug development and the AI Act of the European Union coming into effect , the quality of reporting of ML models including PROM scores needs to be prioritized. The critical examination of the status quo of ML models including PROM scores in oncology performed herein allowed us to identify areas for improvement in future reporting of ML in the field, in particular concerning the reproducibility of the models. Improving the reproducibility of ML models would allow independent research groups to externally validate the ML models on new datasets, to evaluate their generalizability and robustness to novel populations and settings . Only with external validation can ML models be considered reliable and applicable in clinical contexts, as seen in other fields . The relatively small size of the databases employed in the included studies also calls for more collaborative efforts in the development and validation of ML algorithms including PROM scores in oncology. Sharing data open access, following the FAIR principles , as well as using a standardized minimum set of PROMs in clinical oncology studies , would also allow researchers to quickly gain access to large datasets for robust ML model development. 
Only with large and multicentric data can reliable ML models be developed and, possibly, applied in clinical practice soon. Based on the results of this systematic review, researchers should, at a bare minimum, use the MI-CLAIM checklist in future documentation of ML models. To further improve the quality of current ML research, journals might also consider asking their reviewers to check the quality of reporting of ML models by applying available study-specific reporting guidelines that are recognized by the scientific community (e.g., CONSORT-AI for randomized controlled trials, SPIRIT-AI for protocols,…) . Furthermore, due to the increasing number of applications of AI in medicine, medical journals should seek to involve engineers, computer scientists, or professionals who have a methodological understanding of ML in their editorial boards. Including such professionals in the review process of these manuscripts would allow checking for the basic requirements of proper development and reporting of ML models. As an example, among the studies included in this systematic review, ten did not clearly prove the independence between training and testing data sets, which is a fundamental aspect of ensuring the quality of applied ML. 
Facing the increasing amount of available data, including PROM scores, with the potential to benefit both clinical research and practice, we lean on novel technology. ML algorithms are a promising tool to learn complex patterns and provide estimates based on PRO data. To meet the full potential of this technology in a thriving field and to ensure that ML models provide researchers, clinicians, and patients with valid and reliable results, transparent reporting is critical given their possible future integration into clinical practice.
Implementing case-based collaborative learning curriculum via webinar in internal medicine residency training: A single-center experience
a353a5c0-f6b6-4237-8703-ae0ff4efca34
10118346
Internal Medicine[mh]
Case-based collaborative learning (CBCL) is a structured, student-centered approach that incorporates pre-session reading, readiness assessment, and interactive case-based sessions, and it has been shown to improve medical students’ knowledge. CBCL has also been used in resident training in a dermatology residency program. A previous study assessed the residents’ knowledge using content-related questions and surveyed the acceptance of CBCL. The results showed that CBCL improved the residents’ knowledge, and it was found superior to traditional didactic teaching. Adding CBCL courses to standard residency training may improve the quality of education. At present, in China, standardized residency training in internal medicine is a uniform, 3-year program. The rotation in the cardiology department is usually 4 months in length. Traditional resident training relies largely on bedside teaching and classroom-based teaching in the format of didactic teaching, case discussion, and journal club. However, this training model has several limitations. First, although bedside teaching provides opportunities for in-depth learning, the type and severity of cases vary, and the quality of rotations differs considerably. Second, large tertiary educational hospitals may concentrate on certain subspecialty areas, while access to other common subspecialty topics may be limited. Third, while didactic teaching mainly concentrates on simple knowledge transfer, its efficiency and acceptability among learners have been questioned. As for case discussion, despite being regularly held in Chinese educational hospitals, it usually serves clinical purposes and is not carefully designed to meet the educational needs of residents. Therefore, adding a systematically designed CBCL course with selected topics may fill this gap. However, during the coronavirus disease 2019 (COVID-19) pandemic in early 2020, gathering indoors was not recommended. Webinars have become a reliable means of delivering courses and have been proven to be a valuable teaching method that can fulfill a variety of educational needs, such as resident training, continuing medical education, patient education, and more. The use of webinars increased drastically during the COVID pandemic, as indicated by the number of publications. Webinars provide the opportunity for remote meetings, save time that would otherwise be spent commuting, and are particularly beneficial for residents who may be post-call or unable to leave clinical areas. In the present study, we aimed to test the influence of a CBCL curriculum in webinar format on the quality of residency training and residents’ satisfaction. 2.1. Patient and public involvement statement Patients or the public were not involved in the design, conduct, reporting, or dissemination plans of our research. 2.2. Study design Between February and April 2020, we implemented 8 CBCL sessions in webinar format in an internal medicine residency training program in Beijing Tsinghua Changgung Hospital, Beijing, China. In total, 9 residents were invited to participate in the study and were recruited on a voluntary basis. At baseline, residents participated in learning activities for about 5 h/wk, including didactic teaching, case discussion, and journal club, which were delivered by peers or faculty members. No preexisting session used any group discussion, including CBCL. 2.3. 
Curriculum development and implementation The need assessment for the curriculum development included a survey of faculty regarding the knowledge gaps of rotating residents and an analysis of end-of-rotation examination results from the training program. A total of 8 topics were chosen: angina pectoris, acute myocardial infarction, heart failure, hypertension, atrial fibrillation, infective endocarditis, cardiomyopathy, and myocarditis. Under these topics, cases with typical presentations were prepared and promoted for discussion. The learning objective was to improve residents’ knowledge on these topics. 2.3.1. Reading materials and readiness assessment. Pre-session reading materials included lecture slides, the latest local guidelines, and guidelines published by the American College of Cardiology/American Heart Association and the European Society of Cardiology. A readiness assessment with 10 to 20 multiple choice questions (MCQs) was conducted online 1 day prior to the session. Answers and detailed explanations were released immediately upon completion of the questions to consolidate the relevant concepts and knowledge. Notification messages were sent weekly to learners to remind them about the readiness assessment prior to the webinar session. 2.3.2. The CBCL sessions in webinar format. The structure of the discussion session followed the principles of the CBCL teaching model, combining small-group and large-group discussions, delivered online using Tencent Meeting software. Residents were divided into 3 groups during small-group discussion using WeChat software. Residents were instructed to keep video on and audio off when they were silent. However, they could turn on the audio for questioning and speaking at any time. Residents were encouraged to lead the discussion on most occasions. Two faculty members facilitated the entire session and summarized the answers to the questions. Residents were required to submit a summary report after each session. 2.4. Outcomes Fifty MCQs were delivered to assess residents’ knowledge before and after the curriculum. The MCQs were randomly selected from an online standardized question bank developed previously by hospital faculty for the assessment of internal medicine residents. Difficulty levels were balanced, and 80% had 1 correct answer while 20% had multiple correct answers. Two surveys, at the end of the second session and the last session, were delivered to residents to assess their satisfaction with CBCL. The surveys included 5-point Likert scale scores to evaluate residents’ attitudes toward CBCL, self-assessed improvement, satisfaction with case selection, satisfaction with the teaching method, and their attitude regarding participation in similar courses in the future. 2.5. Statistical analysis Changes in knowledge were assessed using the paired t test to compare the mean values of the MCQ scores before and after the curriculum. The Wilcoxon signed-rank test was used to compare 5-point Likert scale scores obtained from the 2 surveys. The statistical analysis was performed using SPSS 23.0 software (IBM, Armonk, NY).
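For readers who prefer an open-source route, the two analyses described above (a paired t test on pre/post MCQ scores and a Wilcoxon signed-rank test on repeated Likert ratings) can be run as in the sketch below; the scores are invented for illustration, since the study itself used SPSS.

```python
from scipy.stats import ttest_rel, wilcoxon

# Hypothetical pre- and post-curriculum MCQ scores (out of 100) for 9 residents.
pre  = [58, 64, 70, 72, 60, 66, 80, 74, 68]
post = [66, 70, 78, 80, 64, 72, 84, 80, 82]
t_res = ttest_rel(pre, post)
print(f"Paired t test: t = {t_res.statistic:.2f}, p = {t_res.pvalue:.3f}")

# Hypothetical 5-point Likert satisfaction ratings at week 2 and week 8.
week2 = [3, 4, 2, 3, 4, 3, 3, 2, 4]
week8 = [4, 5, 4, 4, 5, 4, 4, 4, 5]
w_res = wilcoxon(week2, week8)
print(f"Wilcoxon signed-rank: W = {w_res.statistic:.2f}, p = {w_res.pvalue:.3f}")
```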
3. Results

A total of 9 residents participated in the CBCL curriculum in webinar format, of whom 3 (33.3%) were male. Six residents were postgraduate year one (PGY-1), two were PGY-2, and one was PGY-3. Participants' demographic characteristics and course details are shown in Table . Most residents had inpatient cardiology rotation experience (0.9 ± 1.2 months). The overall course attendance rate was 94.4%. The average time spent on CBCL study was 6.3 ± 4.1 h/wk, of which pre-session learning accounted for 4.1 ± 2.1 h/wk and the post-session summary for 2.2 ± 2.3 h/wk.

3.1. Changes in knowledge

All residents completed the pre- and post-curriculum assessments with 50 MCQs. The total score was 100, with 2 points allocated to each question. The mean scores were 68.0 ± 12.3 before and 75.1 ± 9.9 after the curriculum (P = .029).

3.2. Satisfaction assessment

Survey results for the satisfaction assessment are presented in Figure . All 9 residents responded to the surveys (response rate, 100%). Among them, 5 (55.6%) residents selected "like" or "extremely like" for overall satisfaction at week 2; in the second survey, conducted at week 8, this number increased to 8 (88.9%). In terms of self-assessed improvement, most residents responded positively at week 2 (6, 66.7%) and week 8 (8, 88.9%). In both surveys, the majority of participants (88.9% and 100%) reported satisfaction with the cases used for teaching. When asked about their attitude toward the teaching methodology, 6 (66.7%) residents responded positively at week 2 and 7 (77.8%) at week 8. Only 4 (44.4%) residents agreed to participate in similar courses in the future in the first survey, and this number improved to 7 (77.8%) at week 8. To compare the results of the 2 surveys, the answers on the 5-point Likert scale were graded from 1 for "extremely dislike/disagree" to 5 for "extremely like/agree," with higher scores indicating greater satisfaction. The median scores from the initial survey were 4 for overall attitude, self-evaluated improvement in clinical reasoning abilities, satisfaction with case selection, and satisfaction with teaching methods, and 3 for agreement on future participation. These median scores indicated overall satisfaction of the residents with the course.
Additionally, the repeated survey at week 8 remained consistent with the first survey and even showed a tendency toward improvement in overall attitude and agreement on future participation (Fig. ).

4. Discussion

In the present study, we assessed the effects of a CBCL curriculum in webinar format on internal medicine residents' knowledge and their attitudes toward this teaching module. We found that the CBCL sessions in webinar format were associated with improved mastery of knowledge on cardiovascular diseases, as evidenced by improved MCQ scores. The teaching module was relatively well accepted by residents, and the acceptance rate improved somewhat over the course of the curriculum. Collectively, these results demonstrated that CBCL sessions in webinar format could be advantageous for training internal medicine residents. CBCL uses a flipped-classroom model, integrating elements of case-based learning and problem-based learning. The flipped-classroom model has been adopted by several medical centers worldwide. Allenbaugh et al tested a flipped-classroom curriculum in cardiology residency training. They randomized 98 internal medicine residents into a flipped-classroom curriculum group (with weekly case discussion) and a control group. Despite positive perceptions in the flipped-classroom group, the survey found no significant differences in residents' knowledge, attitudes, or preparedness. The overall results were highly consistent with those of other studies and highlighted the different effects of flipped-classroom curricula on postgraduate versus undergraduate medical students.
The authors attributed the differences to the residents themselves (i.e., residents have difficulty balancing their limited time between work responsibilities and pre-session reading). As a novel teaching method, CBCL lacks robust data for cardiology residency training and may face similar difficulty in engaging residents in the same setting. However, the study by Krupat et al demonstrated the advantages of CBCL, which students described as "engaging," "fun," and "thought-provoking." Furthermore, we minimized the pre-session reading materials to reduce the study load in our curriculum. The survey showed that the overall time needed per week was 6.3 ± 4.1 hours, which appeared acceptable given the positive feedback obtained from residents and the overall improvement in their performance. Adopting CBCL with a minimized study load may be essential to maintaining participants' willingness to study. CBCL was originally designed as a classroom-based group discussion in a shared physical space, with each small group sitting around a table. The current CBCL curriculum was delivered in webinar format primarily because of the COVID-19 pandemic, to reduce commuting time, especially for post-call residents, and to improve the attendance rate. This advantage was well reflected in the high attendance rate (94.4%) of the course. Delivering the CBCL curriculum in webinar format may also provide other benefits. We found that learners who did not speak up in public expressed their thoughts in the chat box during the discussion, which encouraged them to communicate with peers and tutors. Furthermore, the CBCL curriculum in webinar format makes it possible to involve tutors and learners from different geographic locations. Establishing a standardized CBCL curriculum in webinar format may help integrate educational resources across centers and improve the homogeneity of residency training even in the post-pandemic era. However, the CBCL sessions in webinar format still have some disadvantages. For instance, there is a concern about impaired engagement when learners study separately; this drawback is particularly important when didactic lectures are delivered. CBCL is thought to be engaging because of the active thinking and communication it provokes. Thus, the CBCL sessions in webinar format enabled residents to learn actively and retained the advantages of active learning strategies, such as improved knowledge mastery and a high acceptance rate, both of which were evident in our study. In addition, hardware- and software-related concerns are noteworthy; a pretest is essential to ensure the smooth conduct of the CBCL sessions in webinar format. There were several limitations in the present study. First, it was a single-center study that only involved residents in an internal medicine training program, which may limit the generalizability of the findings. Second, the course assessment was conducted using MCQs and lacked a control group for comparison. The satisfaction surveys may also be biased, as the authors are faculty at the same hospital. Third, the number of participants was small, which prevented us from performing a randomized trial and allowed only a preliminary study.
An assessment initially planned for 6 months after the course was interrupted by the pandemic and could not be conducted before these residents completed their training. A study with more participants and a longer follow-up period is now being conducted for further investigation. Implementing the CBCL curriculum in webinar format for cardiology residents resulted in improved knowledge mastery and a high acceptance rate.

Author contributions: Conceptualization: Rong He. Data curation: Ying Xie, Fang Liu, Ou Zhang, Wei Xiang, Le Miao. Formal analysis: Wei Xiang, Le Miao, Ping Zhang. Funding acquisition: Rong He. Investigation: Lanting Zhao, Lingyun Kong. Methodology: Rong He, Ying Xie. Project administration: Ying Xie. Resources: Fang Liu, Ou Zhang. Supervision: Lanting Zhao, Lingyun Kong. Writing – original draft: Rong He, Ying Xie. Writing – review & editing: Ping Zhang.
Canine monocytic ehrlichiosis (CME) is caused by Ehrlichia canis, an intracellular parasitic bacterium and tick-borne pathogen. This pathogen has recently received further attention because it has led to increasing morbidity and mortality in animals. Transmission is mediated by the tick Rhipicephalus sanguineus (sensu lato), and, following infection, the bacteria replicate in monocytes and macrophages. The clinical presentation of CME comprises acute, chronic and subclinical phases with several clinical manifestations. The acute phase persists for 2–4 weeks and is characterised by signs in diverse body systems, the most common being fever, weight loss, anorexia, depression, lymphadenomegaly, splenomegaly and vasculitis. In addition, dogs in this phase show thrombocytopenia as the most common laboratory abnormality. In the subclinical phase, dogs have persistent thrombocytopenia and leukopenia on laboratory analysis; however, during this stage the thrombocytopenia may be mild or absent in some dogs, and affected dogs usually do not show clinical signs. The duration of this phase varies from months to years. Additionally, during this phase the microorganism may not circulate in the blood but may instead be deposited in a target organ, such as the spleen, bone marrow or liver. Furthermore, previous research has shown that E. canis is widely distributed in different organs of infected dogs. In the chronic phase, dogs show severe pancytopenia, haemorrhagic diathesis and general debilitation. Immune system deficiency, stress, co-infections, strain virulence and geographical region are factors that influence the presentation of this phase in affected dogs. Diagnosis of the disease remains challenging for practicing veterinarians. Identification of morulae in monocytes on a blood smear is diagnostic of the disease; however, a low frequency of morulae in buffy coat smears has previously been reported, which could be due to the low parasitaemia observed in natural infection. Other, more specific methods are also used for diagnosis, including the immunofluorescence antibody test (IFA) and the enzyme-linked immunosorbent assay (ELISA), which both detect specific antibodies, as well as molecular techniques such as the polymerase chain reaction (PCR). Presently, the Infectious Disease Group of the American College of Veterinary Internal Medicine (ACVIM) requires that dogs diagnosed with this disease show suggestive clinical signs and have positive tests, either by serology and/or by PCR. Diagnosis is complicated in dogs in the subclinical phase of the disease because these dogs normally do not show clinical signs. Furthermore, cross-reactivity and a failure to differentiate between current and past infections with ELISA and IFA tests have been reported. On the other hand, in both the subclinical and chronic phases, parasitaemia may be low, as the bacteria are located in the target organs; in these cases, dogs will be negative on a blood PCR test. The presence of E. canis DNA in several tissues, such as blood, bone marrow, spleen, liver, kidney and lymph nodes, has been demonstrated by PCR in experimentally infected dogs. The goal of this study was to evaluate the occurrence of E. canis
in different tissues (liver, spleen, lymph node and bone marrow) of dogs naturally infected with monocytic ehrlichiosis, on the assumption that a considerable percentage of dogs negative for E. canis by blood PCR would show positive results in biopsies of different tissues. The variation in E. canis infection across the four tissues was analysed in two groups of dogs: those positive and those negative by PCR of blood samples.

Animals

Fifty-nine dogs obtained from the municipal Anti-Rabies Centre of Juárez were used in this study. Based on the Centre's internal regulations, animals that were not adopted 8 weeks after their arrival were euthanised. Euthanasia was performed by an overdose of sodium pentobarbital according to national and international animal welfare regulations. To increase the likelihood that dogs would be in the subclinical phase of the disease, the inclusion criteria required that the dogs have ticks but be clinically healthy; dogs without ticks or with signs of any disease were excluded.

Sample collection

Whole blood samples were collected in tubes containing EDTA (Vacutainer BD®, Mexico City, Mexico) by cephalic venepuncture after administration of sodium pentobarbital. The other tissue samples were acquired by biopsy immediately after euthanasia, following surgical asepsis to prevent cross-contamination. For the same purpose, instruments were changed for the biopsy of each tissue, and particular care was taken to prevent blood or other fluids from the dog coming into contact with the tissue samples. Bone marrow aspirates were obtained with bone marrow aspiration needles (Argon Medical Devices®, Dallas, TX, USA) from the greater tubercle of the humerus, as described by Raskin & Messickin. Hepatic and splenic biopsies were obtained by celiotomy using the ligature fracture technique. Finally, prescapular lymph nodes were biopsied with a biopsy punch (Premier®, Plymouth Meeting, PA, USA) as previously described. Tissue samples were labelled and frozen at −20 °C for later DNA extraction and PCR analysis. Biopsies obtained from spleen, liver and lymph node had an average weight of 200 mg (range 150–210 mg). The amount of whole blood obtained was 1.5 ml, and the bone marrow aspirate averaged 0.6 ml (range 0.4–0.7 ml).

DNA extraction

For the blood samples, genomic DNA was extracted from the cell pellet using the UltraClean Blood DNA Isolation Kit (MoBio Lab®, Carlsbad, CA, USA) according to the manufacturer's instructions. The other tissues were handled in a sterile fashion prior to DNA extraction. For DNA extraction from the biopsies, the protocol was modified by the prior addition of lysis reagents. The tissues were then macerated using a low-velocity drill (Jorvet Lab®, Loveland, CO, USA) and a dental bur (JOTA Technical®, Rüthi, Switzerland). Once each tissue was macerated, DNA extraction was performed in the same way as for blood.

PCR amplification and analysis

Detection of E. canis DNA was achieved with a nested PCR molecular test. First, to amplify the Ehrlichia spp. 16S rRNA gene, 2 pmol of primers ECC (5′-AGA ACG AAC GCT GGC GGC CAA GC-3′) and ECB (5′-CGT ATT ACC GCG GCT GCT-3′) were used. In the second PCR, to amplify the E. canis
16S rRNA gene, 2 pmol of primer HE-3 (5′-TAT AGG TAC CGT CAT TAT CTT CCC TAT-3′) combined with the reverse primer ECA (5′-CAA TTA TTT ATA GCC TCT GGC TAT AGG AA-3′) were used. The PCR was performed in a thermocycler (Bio-Rad® C-1000 Touch, Hercules, CA, USA), starting at 94 °C for 1 min followed by 35 cycles of 94 °C for 1 min (denaturation), 60 °C for 1 min (hybridisation) and 72 °C for 3 min (extension). This was followed by 94 °C for 5 min and then 40 cycles of 94 °C for 1 min (denaturation), 60 °C for 1 min (hybridisation) and 72 °C for 1 min (extension), as described previously.

Statistical analyses

A multivariate logistic regression model was used for the binary response variable 'infection' (y = 1 if positive, y = 0 if negative), with two explanatory variables: blood positivity (two levels) and tissue (four levels). The model was therefore: infection = blood + tissue + error. Infection was analysed separately in the two groups of dogs. Within each group, infection was compared among the four tissues using z tests between pairs of tissues with a Scheffé multiple-comparison adjustment. Comparisons of the proportions of positive and negative results in blood, lymph node, liver and spleen samples were performed using chi-square and Fisher's exact tests with the FREQ procedure of SAS 9.0. Significance was set at P < 0.05.
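A rough Python analogue of the analyses described above is sketched below. This is not the authors' SAS workflow: the data frame is simulated for illustration, the Scheffé adjustment is omitted, and the 2 x 2 table simply reuses the any-tissue-positive/negative counts reported later in the Results as one possible layout.

```python
# Sketch of the statistical models described above, using Python instead of SAS.
# The long-format data frame is fabricated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
tissues = ["bone_marrow", "spleen", "liver", "lymph_node"]

# 59 dogs (28 blood-positive, 31 blood-negative), each sampled in 4 tissues
dogs = pd.DataFrame({
    "dog": np.arange(59),
    "blood": np.where(np.arange(59) < 28, "pos", "neg"),
})
long = dogs.loc[dogs.index.repeat(4)].reset_index(drop=True)
long["tissue"] = np.tile(tissues, 59)

# Illustrative tissue-specific infection probabilities (not the observed rates)
base_p = {"bone_marrow": 0.45, "spleen": 0.40, "liver": 0.35, "lymph_node": 0.10}
long["infection"] = rng.binomial(1, long["tissue"].map(base_p).to_numpy())

# Logistic regression: infection = blood + tissue (+ error), as specified in the text
fit = smf.logit("infection ~ C(blood) + C(tissue)", data=long).fit(disp=False)
print(fit.summary())

# Fisher's exact test on one possible 2x2 layout: blood status vs any-tissue positivity,
# using the counts reported in the Results (16/12 and 19/12)
table = [[16, 12],   # blood-positive dogs: tissue-positive, all-tissues-negative
         [19, 12]]   # blood-negative dogs: tissue-positive, all-tissues-negative
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```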
Results

Of the 59 dogs analysed in this study, 28 (47.45%) were positive for E. canis by PCR of blood samples, and 31 (52.55%) were negative. Of the 28 dogs positive by blood PCR, 16 (57.14%) were also positive by PCR of at least one tissue. Conversely, among the dogs with negative blood PCR results (n = 31), 19 (61.30%) were positive for E. canis in at least one tissue and 12 (38.70%) were negative in all biopsied tissues. The tissue with the highest number of positive samples was the bone marrow, with 26 (44.60%). Positive bone marrow results occurred in dogs with both positive and negative blood samples; for example, 10 dogs (35.71%) that were positive by blood PCR were also positive by bone marrow PCR (Table ).
Furthermore, 12 of the 19 blood-negative cases (63.15%) were positive in the bone marrow (Table ). In half of the negative cases (n = 6), the PCR results for the other tissues were also negative. Conversely, in two cases the PCR was positive for all tissues analysed. The tissue with the second highest number of positive results was the spleen, with a prevalence of 42.37% (n = 25). Among the blood PCR-positive dogs, 16 (57.14%) were also positive by spleen PCR (Table ). In blood PCR-negative dogs, the splenic tissue showed 9 (47.36%) positive PCR results, and on two occasions the spleen was the only positive tissue (Table ). Also, on two occasions the PCR was positive for all the tissues analysed. The remaining combinations are presented in Tables and . The liver had 22 PCR-positive cases (37.28%) of the total samples evaluated. Of the blood PCR-positive dogs, 12 (42.85%) were also positive for liver tissue (Table ). Similarly to the spleen, of the 19 blood PCR-negative dogs, 10 (52.63%) were positive for liver tissue. Among the negative blood samples, there was one liver-only positive result (Table ), and the PCR was positive in all tissues twice. Finally, the tissue with the fewest positive results was the lymph node, with 5 cases (8.47%). Among the blood PCR-positive samples, only 2 cases were positive (10.52%; Table ). On the other hand, blood samples negative by PCR were positive for lymphatic tissue in 3 cases (15.78%), and in none of these three cases was the lymph node the only tissue with a positive result (Table ). Considering infection across the four tissues, the infection rate was similar in dogs negative and positive by blood PCR (P > 0.05). The average infection rate in tissues was 0.23 ± 0.05 for blood-negative dogs and 0.35 ± 0.04 for blood-positive dogs (df = 233, P < 0.001).

Discussion

In the present study, of the 59 clinically healthy dogs analysed, 47.45% had a positive result for E. canis by PCR of blood samples. In addition, PCR identified a higher prevalence of E. canis in different tissues of naturally infected dogs, both in those with positive and in those with negative blood PCR results. These results demonstrate that some dogs suspected of having subclinical ehrlichiosis harboured E. canis DNA in various tissues even though their blood PCR results were negative. At present, diagnosis by PCR is more useful than serology for differentiating concurrent infections and co-infections with diverse Ehrlichia spp. and is used for treatment monitoring. However, in naturally occurring CME, the diagnostic sensitivity and the optimal tissue for PCR testing in untreated dogs or in the post-treatment setting have not yet been clarified. The results obtained here demonstrate that in dogs with naturally occurring CME it is feasible to detect E. canis in different tissues even when blood tests are negative. In the acute phase of infection, E. canis is easily detected in blood, whereas in the subclinical and chronic phases there is a possibility of false negatives. Therefore, some tissues, such as the bone marrow and the spleen, are more appropriate for sampling, an argument corroborated by the present investigation. This study does not suggest performing tissue PCR for the routine diagnosis of CME, because performing biopsies in dogs with no clinical signs is impractical.
However, sampling tissues may be relevant to understanding the distribution of CME in dogs. Comparative information on the spread and presence of E. canis detected by PCR in multiple organs is limited, especially in dogs with the natural form of the disease, although some research has been done in experimentally inoculated dogs. For example, PCR has been shown to be effective in detecting E. canis in diverse tissues of dogs with experimental disease. Likewise, the spleen has been described as a tissue that can be useful for demonstrating the presence of E. canis DNA by PCR, and the possibility that dogs in the subclinical phase may be negative by blood PCR yet positive by PCR of splenic aspirates has also been established. Splenic aspirates have previously been used to detect E. canis DNA by PCR. Previous research showed that dogs that were blood-positive were also positive on splenic aspirates, compared with those that were negative in blood. These results differ from those obtained in the present investigation, where a prevalence of 42.37% (n = 25) was obtained and, of the 19 blood PCR-negative dogs, nine (47.36%) were positive by PCR of splenic biopsies. It has been shown that in the acute phase of disease, splenic aspirates are not superior to blood samples for the detection of ehrlichial DNA by PCR. However, splenic aspirates are superior to blood for evaluating the response to therapy in experimentally treated dogs, because E. canis DNA can still be detected in the spleen after its elimination from the blood. The results of the present study also differ from previous reports in which the numbers of dogs positive and negative for E. canis by PCR were similar in blood samples and splenic aspirates: E. canis DNA was detected in 29 (72.5%) spleen samples and 30 (75%) whole blood samples, and was not detected in 11 (27.5%) spleen samples and 10 (25%) whole blood samples. The difference between those studies and the present investigation lies in the splenic material analysed: in our study, DNA was obtained from splenic biopsies, whereas in the others it was obtained from splenic aspirates. In another investigation, of 78 dogs with splenic disease, only one was positive for E. canis by PCR of a splenic biopsy. The present study motivates further research to establish the most suitable technique for obtaining E. canis DNA from the spleen in dogs, comparing splenic aspirates with biopsies, including those taken with minimally invasive techniques such as ultrasound-guided or laparoscopic methods. Another important difference in our study is that the tissue with the highest number of positive samples was the bone marrow, in contrast to a previous report that obtained more positives from splenic aspirates. Nevertheless, other studies have shown that tissues other than the spleen can perform better for detecting E. canis by PCR. For example, some authors describe results similar to those obtained in the present study and show that E. canis DNA was most often amplified from bone marrow; however, in those cases the disease was experimental and PCR was performed on aspirates. On the other hand, in one study of biopsies from dog cadavers, contrary to the results of the present study, none of the bone marrow biopsies was positive for E. canis by PCR.
An important limitation of the present study was the absence of blood analysis, especially blood counts, which could have identified more accurately the dogs in the subclinical phase of monocytic ehrlichiosis. However, it can be assumed that the positive dogs were in this phase, since they were clinically healthy. Ehrlichia canis is widespread throughout the different body systems of infected dogs, and molecular detection has shown that its DNA can be present in different target organs. In the subclinical and chronic phases, E. canis may be 'hiding' in splenic macrophages. In that case, the spleen may be the principal reservoir of E. canis, probably because of its abundance of macrophages, and some studies propose that it is the last organ to harbour the microorganism before its elimination. Therefore, because it contains a large number of bacteria, the spleen is considered by some authors to be the organ of choice for molecular detection in the different phases of the disease. Although E. canis DNA was detected in the spleen in our study, our results differ slightly from this statement, since the spleen was the third most affected organ, surpassed by the bone marrow and liver. However, our results are similar to those of other studies suggesting that the spleen is inferior to other tissues. In conclusion, the results of this study could be applicable in cases where the diagnostic sensitivity of blood PCR is suboptimal; in some such cases it will be necessary to search for E. canis DNA in different organs by molecular methods. In this study we have demonstrated that, although infection in organs was 30% lower in dogs negative by blood PCR, a considerable number of dogs (n = 19, or 61.30%) with negative blood PCR results were positive for E. canis in at least one organ. Dogs with positive blood results were positive in three tissues (liver, bone marrow and spleen) in 48% of cases. These three tissues were also more often positive than the lymph node, which was positive in only 8% of the samples evaluated, four times lower than any of the other three tissues. Dogs with negative blood results showed 33% detection of E. canis DNA in the spleen, liver and bone marrow; however, the presence of DNA was higher in the liver and bone marrow than in the lymph node. Because in some cases DNA was detected in only one of these tissues, we propose that biopsies be performed on at least these three. A similar approach has been proposed for other rickettsial agents, such as Anaplasma spp., where blood samples are routinely used for screening, but other tissues might be useful in persistently infected dogs with intermittent or low-level bacteraemia. These results open the possibility of similar research aimed at detecting E. canis by PCR of different tissues in treated dogs that continue to show signs or alterations in blood tests, as well as in dogs that show signs suggestive of the disease but have negative serological and molecular blood results.
Applying polygenic risk score methods to pharmacogenomics GWAS: challenges and opportunities
Polygenic risk scores (PRSs) have recently emerged as promising tools in disease genome-wide association studies (GWAS) for predicting human diseases and complex traits. This is particularly important for diseases and complex traits with polygenic genetic architectures, where many genetic variants have small but genuine effects that do not reach the genome-wide significance threshold. A PRS combines multiple single nucleotide polymorphisms (SNPs) into a single aggregated score that can be used to predict disease risk. It is an individual-level score calculated from the number of risk variants a person carries, weighted by SNP effect sizes derived from an independent large-scale discovery GWAS. This score therefore represents the total genetic risk of a specific individual for a particular trait, which can be used for clinical prediction or screening. To date, many PRSs have been successfully used in disease risk prediction and population stratification. For example, Khera et al. developed a PRS for coronary artery disease (CAD), in which the PRS-high group (i.e. the top 8.0% of the population) inherited a ≥ 3-fold increased risk for CAD. Mavaddat et al. built a PRS optimized for predicting estrogen receptor (ER)-specific disease; individuals in the highest 1% of PRS had a 4.37-fold (increased) risk of developing ER-positive disease, whereas those in the lowest 1% had a 0.16-fold (decreased) risk. In contrast to disease genetic studies, pharmacogenomics (PGx) studies explore how genetic variation influences drug responses, including drug metabolism, efficacy and toxicity, with the ultimate goal of improving and personalizing drug therapy. Such influence usually acts through alterations in a drug's pharmacokinetics (PK, i.e. absorption, distribution, metabolism, elimination) or through modulation of a drug's pharmacodynamics (PD, i.e. modifying a drug's target or perturbing biological pathways that alter sensitivity to the drug's pharmacological effects). Like many complex traits, most drug responses in PGx are extremely polygenic. For example, Muhammad et al. showed that the six PD and five PK phenotypes they studied were highly heritable, and that the majority of the heritability was explained by small-effect and moderate-effect variants rather than large-effect variants, which demonstrates the potential for using PRS approaches in the clinic to improve the prediction of PD/PK phenotypes and fulfill the promise of precision medicine. Some PRS applications in PGx studies have emerged in recent years. Zhang et al. developed a PRS for schizophrenia risk from schizophrenia GWAS summary statistics reported by the Psychiatric Genomics Consortium and showed that patients with higher PRSs tended to have less improvement with antipsychotic drug treatment. Similarly, Damask et al. found that both the absolute and relative reductions in major adverse cardiovascular events with alirocumab treatment compared with placebo were greater in patients with higher PRS in the ODYSSEY OUTCOMES trial, where the PRS was constructed using disease GWAS summary statistics from a genome-wide meta-analysis of CAD risk. In addition, Marston et al. used a PRS constructed from 27 SNPs derived from a recent large-scale disease GWAS (a meta-analysis GWAS for CAD) to successfully predict benefit from evolocumab therapy in patients with atherosclerotic disease.
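As a concrete illustration of the weighted-sum construction described above, a minimal PRS computation could look like the following sketch; the genotype dosages and effect sizes are toy values, not data from any of the cited studies.

```python
# Minimal sketch of a polygenic risk score: a weighted sum of risk-allele dosages,
# with weights (effect sizes) taken from an independent discovery GWAS.
# All numbers below are toy values for illustration.
import numpy as np

n_individuals, n_snps = 5, 8

# Genotype dosages (0, 1, or 2 copies of the risk allele), individuals x SNPs
genotypes = np.random.default_rng(1).integers(0, 3, size=(n_individuals, n_snps))

# Per-SNP effect sizes (e.g. log odds ratios) from discovery GWAS summary statistics
betas = np.array([0.12, -0.05, 0.08, 0.20, -0.02, 0.15, 0.03, -0.10])

# PRS for individual i: sum_j dosage_ij * beta_j
prs = genotypes @ betas

# Standardize so individuals can be ranked and stratified within the target cohort
prs_z = (prs - prs.mean()) / prs.std()
print(prs_z)
```

In practice the same score is then used to rank individuals (e.g. flagging the top percentiles as a high-risk stratum), which is the basis of the stratification examples cited above.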
On the other hand, instead of building PRSs from disease GWAS data, a few published studies have constructed PRSs from drug-related data for predicting drug safety or efficacy. For example, Lanfear et al. built an efficacy PGx PRS from a PGx genome-wide analysis of β-blocker × SNP interactions and successfully predicted all-cause mortality (β-blocker benefit) in a European population. These examples highlight the potential benefits of developing PRSs for (safety or efficacy) drug response prediction in PGx studies, as well as their potential clinical utility. PRS applications in PGx studies have been reviewed by several papers with different focuses. Specifically, Johnson et al. and Cross et al. reviewed 51 and 63 PRS application papers, respectively, focusing on PRSs built with variants from disease GWAS; they provided an overview of PRS applications and successful findings in different disease areas, challenges and reporting guidelines. Siemens et al. reviewed 89 papers, focusing on PRSs derived from pharmacogenetic variants associated with drug responses in either candidate-gene PGx studies or PGx GWAS; the authors also reviewed strategies for PRS performance evaluation and validation. Similarly, Kumuthini et al. focused on the validation of PRSs in PGx and the potential impact on their translation into clinical utility. Regarding PRS methods developed in the disease genetics field, several papers have reviewed PRS methodologies using disease GWAS summary statistics, focusing on genotype (main or prognostic) effects only. In contrast, PRSs built for PGx studies from randomized clinical trials (RCTs) need to handle both prognostic and predictive effects, because in PGx studies a patient's clinical outcome is influenced by both prognostic and predictive factors. A prognostic biomarker provides information about an endpoint (i.e. clinical outcome) irrespective of the treatment type, whereas a predictive biomarker is associated with an endpoint in a treatment-dependent manner (i.e. it predicts treatment benefit). In the PRS context, the prognostic effect measures the association strength of the main genotype (G) effect with the clinical outcome before any treatment intervention, whereas the predictive effect measures the association strength of the G × T interaction with the clinical outcome after treatment. Although current practice involves directly applying disease GWAS-based PRSs to PGx studies, Zhai et al. pointed out that this approach might not fully recover the heritability of drug response, since it relies on a stringent assumption that is rarely satisfied in real PGx data. The authors further proposed a series of PRS-PGx methods that use PGx GWAS summary statistics instead. PRS modeling in PGx GWAS shares the same challenges as in disease GWAS. These include trans-ethnic bias across populations (i.e. GWAS sample sizes for non-European populations are relatively small, and a PRS derived from one population is expected to perform less well in other populations owing to differences in allele frequencies, linkage disequilibrium (LD) patterns and effect sizes across populations) and architectural diversity across multiple correlated traits.
Other challenges include the lack of clear guidelines on the clinical interpretation of PGx polygenic models, the need for well-defined drug response clinical endpoints in PGx studies, and the lack of PRS reporting guidelines. Despite previous discussion and summary of these challenges (for example, in the review papers above), possible strategies and solutions for tackling them remain unclear. It is therefore critical to take a step further by proposing new PRS application strategies and methods. To gain insight into this landscape, we first conduct a systematic review of current progress in both PRS applications and statistical method development in PGx GWAS. We identified 90 papers published by 11 March 2022 for our review of PRS applications in PGx, and we summarize 23 PRS methods used in these papers in our PRS methods review. We further analyze these systematic review results to provide insights into the status, trends and challenges of PRS applications and method development in PGx GWAS and discuss potential areas for improvement. Compared with PRS modeling in disease GWAS, PRS analysis in PGx GWAS with drug response endpoints (efficacy or safety) is more challenging and faces additional, unique challenges. These include the lack of knowledge about whether to use PGx GWAS, disease GWAS or both GWAS/variants in the base cohort (BC) for PRS construction; the small sample sizes of PGx GWAS from RCTs (compared with large disease cohorts), which often result in low power for prediction or association analysis; and the more complex statistical modeling required to handle both prognostic and predictive effects simultaneously. There is a trade-off between choosing PGx and disease GWAS (summary statistics) data in the BC to build PRSs. Choosing disease GWAS data, which typically have large sample sizes, usually provides high power for predicting prognostic effects but low power for predicting predictive effects (i.e. genotype-by-treatment interactions). In contrast, choosing PGx GWAS data, which typically have relatively small sample sizes, usually provides lower power for predicting prognostic effects but likely greater power for predicting predictive effects, since the PGx variants used for PRS construction are directly related to drug response. In this paper, rather than choosing either PGx or disease GWAS alone in the BC to build a PRS, we propose a new strategy that leverages both disease and PGx GWAS summary statistics. This approach benefits from the large sample sizes of disease genetic studies and the additional, strong predictive effects in PGx studies. In addition, similar to cross-population disease GWAS PRSs, leveraging and properly modeling trans-ethnic populations can potentially reduce prediction bias in cross-population prediction in PGx GWAS. To overcome the challenge of Eurocentric or trans-ethnic bias in PGx GWAS, we extend the PRS-PGx-Bayes method to conduct cross-population PRS modeling with shrinkage parameters shared among multiple populations. The new method simultaneously handles both prognostic and predictive effects. We further perform extensive simulation studies to compare our novel PRS methods using both internal validation with a cross-validation strategy and external validation with an independently simulated validation dataset, as suggested by Siemens et al. and Kumuthini et al.
Furthermore, integrating multiple genetically correlated traits can potentially increase the effective sample size in PGx GWAS and thus improve the power of the PRS association test, the PRS prediction accuracy and PRS-based patient stratification. To investigate the impact of complex genetic architectures involving multiple traits, Zhai et al. systematically reviewed and compared multiple types of multi-trait PRS methods, including regression-based methods (mtPRS-ML and mtPRS-MR), meta-/multi-GWAS-based methods (mtPRS-minP and mtPRS-GSEM), a PCA-based method (mtPRS-PCA) and an omnibus mtPRS method (mtPRS-O), under various genetic architectures. In this paper, we briefly summarize the current status of applying mtPRS methods to PGx GWAS and the main observations from our previous mtPRS method research. In summary, this paper aims to provide an in-depth overview of the current status of both PRS applications in PGx GWAS and the PRS methods used in those applications, identify the main gaps and challenges, and then propose possible solutions, including two new PRS application strategies and methods, to fill the gaps and tackle the challenges. The overall workflow of the paper is summarized in . Our review aimed to summarize the current applications of PRSs in PGx GWAS as well as the PRS analysis methods used in those applications. The detailed paper search workflow is provided in . Study screening was conducted by two independent reviewers (Z.S. and S.J.) to minimize selection bias. In the 'Identification' step, we used the same search terms as Johnson et al., which mainly covered two categories: polygenic score-related terms and drug response-related terms. Our paper search was based on the Medline and EMBASE databases, searched separately up to 11 March 2022. Non-English publications were excluded owing to resource constraints. In the 'Pre-screening' step, we excluded publications by title and abstract. Publications with abstract only, reviews, letters, notes, etc. were excluded because they did not contain enough detail about the PRSs (e.g. base and target cohorts (TC)) for our review. Remaining articles that were not drug related or that did not use qualified PRSs (e.g. not genome-wide, not based on genetic variation, or unweighted PRSs) were further excluded. In the 'Combination' step, we combined the papers from Medline and EMBASE and removed duplicate records. In the 'Screening' step, records were further excluded based on a full-text screen, owing to either unqualified publication types or unqualified approaches to building PRSs. Specifically, papers that were not drug related, were not related to drug response PGx studies, did not construct qualified PRSs, or only worked on the TC via cross-validation were excluded. The whole workflow followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to ensure the completeness of the review. In addition to summarizing the PRS applications in PGx studies, we also summarized the PRS analysis methods used in the 90 identified PGx PRS application papers, as well as some additional PRS methods reviewed in other methodology review papers.
Regarding the challenges of applying PRSs to PGx GWAS, in this paper we mainly focus on three key challenges: (i) the lack of knowledge about whether to choose PGx, disease or a combination of both PGx and disease GWAS summary statistics in the BC for PRS construction; (ii) the Eurocentric or trans-ethnic bias in cross-population PRS prediction; and (iii) the small sample size, low power and more complex PRS modeling in PGx GWAS. We propose two novel PRS strategies and methods to overcome these challenges, which provide potential solutions for better future PRS applications in the field.

Overview of main challenges of PRS applications in PGx GWAS

Our initial search identified 834 papers from Medline and 2487 papers from EMBASE between 2013 and 2022. After pre-screening and combining the two databases, 127 papers were left for full-text screening. Ultimately, 90 papers were included in our systematic review. Despite a steady increase in the number of PRS application articles between 2013 and 2022, PRS analysis in PGx GWAS still faces multiple challenges, which have been extensively discussed in published overview papers and are briefly summarized in the previous section. In this paper, we focus on the three aforementioned key challenges in greater detail.

Challenge 1: Lack of knowledge about whether to choose PGx, disease or both GWAS summary statistics in the base cohort for PRS construction

The use of PRSs in PGx requires a BC and a TC. In the BC, summary statistics are generated to obtain risk variant effect sizes, standard errors and p-values to inform PRS calculations; in the TC, the PRS is developed and tested. To date, many researchers have leveraged GWAS derived from large cohorts of related disease and/or complex trait (i.e. non-disease) phenotypes to develop polygenic models for predicting drug outcomes. For example, 74 of the 90 identified articles (~82%) published by 2022 used BC GWAS from disease studies. These include disease phenotypes related to drug efficacy, complex trait phenotypes related to drug efficacy, disease phenotypes related to adverse drug reactions, and complex trait phenotypes related to adverse drug reactions. However, our previous research showed that such a disease PRS approach cannot recover the full heritability of drug response unless an extremely stringent assumption holds, namely that every causal variant has an interaction effect proportionate to its main effect. More details are provided in the following section. Therefore, 16 of the 90 papers constructed PRSs for drug response prediction using PGx GWAS summary statistics directly rather than disease GWAS summary statistics, since PGx variants are directly associated with drug response, which may provide greater power (especially for predicting drug response through the predictive effect of a PRS). However, using PGx GWAS as the BC for PRS modeling presents unique challenges owing to its typically small sample size and the difficulty of identifying two cohorts with uniformly treated patients. These factors can increase PRS modeling uncertainty and result in low PRS prediction power. To date, it remains unclear whether it is optimal to use disease GWAS summary statistics, which emphasize disease variants with only prognostic effects; PGx GWAS summary statistics, which emphasize PGx variants with both prognostic and predictive effects; or a combination of both in the BC for PRS construction and analysis in PGx GWAS.
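A minimal sketch of how this distinction plays out in a PGx target cohort is shown below: whichever BC the weights come from, the PRS main effect captures the prognostic component and the PRS-by-treatment interaction captures the predictive component. The data are simulated and the model is ordinary least squares for simplicity, so this illustrates the idea rather than any specific published method.

```python
# Sketch of evaluating a PRS in a PGx target cohort: the 'prs' term is the prognostic
# effect and the 'prs:treatment' term is the predictive (genotype-by-treatment) effect.
# All data below are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "prs": rng.normal(size=n),                 # PRS computed from the base cohort weights
    "treatment": rng.integers(0, 2, size=n),   # 1 = active drug, 0 = placebo/control
})
# Simulated drug-response endpoint with prognostic (0.3) and predictive (0.5) effects
df["response"] = (0.3 * df["prs"]
                  + 0.2 * df["treatment"]
                  + 0.5 * df["prs"] * df["treatment"]
                  + rng.normal(size=n))

fit = smf.ols("response ~ prs * treatment", data=df).fit()
print(fit.params)
print(fit.pvalues[["prs", "prs:treatment"]])
```

A PRS built from disease GWAS would typically show a strong 'prs' term but a weak interaction, whereas the motivation for PGx-based (or combined) PRSs is to strengthen the 'prs:treatment' term as well.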
Challenge 2: Eurocentric or trans-ethnic bias in cross-population PRS prediction

Recent studies have found that PRSs exhibit reduced cross-population prediction accuracy, particularly in non-European populations. Building PRSs with GWAS data from the same non-European population may yield limited prediction accuracy owing to the typically small sample size of non-European GWAS compared with European GWAS. This applies to PRS applications in both disease GWAS and PGx GWAS. Conversely, constructing PRSs using European GWAS with larger sample sizes may offer limited improvement in prediction accuracy because of Eurocentric or trans-ethnic bias, which arises from differences in allele frequencies, LD patterns and effect sizes across populations. With increasing efforts to diversify genomic study samples, non-European genomic resources have expanded, leading to rapid growth in PGx studies that leverage multiple populations, from 1 study in 2016 to 22 in 2021. For instance, Cearns et al. constructed a PRS using meta-analysis summary statistics calculated from seven cohorts of European, American, Asian and African populations. To mitigate the impact of trans-ethnic bias, there is a pressing need to develop PRS methods that can handle such trans-ethnic data. Recent publications have introduced new methods for trans-ethnic PRS analysis in disease genetics, but it is unclear whether these methods can be directly applied to PGx studies. To date, to our knowledge, no PGx PRS methods have been developed to address trans-ethnic drug response prediction in PGx settings.

Challenge 3: Small sample size, low power and more complex PRS modeling in PGx GWAS

Subjects in PGx GWAS are usually from RCTs, whose sample sizes are much smaller than those of large disease cohorts. Such small sample sizes typically result in low power for predicting drug responses with prognostic PRS components. There are many strategies for increasing the power of PRS analysis in PGx GWAS; integrating multiple traits during PGx PRS construction provides a natural way to increase power. To date, most PRSs are constructed using a single trait (i.e. using univariate disease or PGx GWAS summary statistics). However, most disease and PGx GWAS data are multivariate in nature, with multiple correlated traits or drug responses. Intuitively, leveraging information from multiple correlated traits can potentially capture more genetic variance and thus boost the power of PRS analysis. Indeed, the number of published papers exploring multiple traits increased rapidly between 2019 and 2021, from 4 to 25. Specifically, 20 of the 25 papers analyzed multiple PRSs separately; these PRSs were constructed from multiple diseases and/or complex traits potentially related to the drug response phenotype in the TC. For example, Fanelli et al. investigated the possible association of PRSs for bipolar disorder, major depressive disorder, neuroticism and schizophrenia with antidepressant non-response or non-remission in patients with major depressive disorder. The remaining papers analyzed multiple PRSs jointly using either machine learning (ML)-based regression or multivariate regression. For example, Taylor et al. used 10 PRSs for depression and genetically correlated traits as predictors in an elastic net model to predict response in tertiary care patients with resistant depression.
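The joint-modeling strategy mentioned above (several trait-specific PRSs entered together in a penalized regression) can be sketched as follows; the PRS matrix and the response are simulated, and the elastic net here stands in for whatever specific model a given study actually used.

```python
# Sketch of a multi-PRS approach: several trait-specific PRSs are used jointly as
# predictors of drug response in a penalized (elastic net) model.
# The data are simulated for illustration; this is not the model from any cited study.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(3)
n, k = 300, 10                      # 300 patients, 10 trait-specific PRSs
prs_matrix = rng.normal(size=(n, k))

# Simulated drug response driven by a few of the PRSs plus noise
true_weights = np.array([0.4, 0.0, 0.3, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0])
response = prs_matrix @ true_weights + rng.normal(scale=1.0, size=n)

# Elastic net with cross-validated penalty; l1_ratio controls the L1/L2 mix
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(prs_matrix, response)
print("estimated weight per PRS:", np.round(model.coef_, 3))
print("cross-validated alpha:", model.alpha_)
```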
There are additional challenges to be addressed while constructing a multi-trait PRS in PGx GWAS compared with disease GWAS. First, a PGx multi-trait PRS requires more complex statistical modeling to handle prognostic and predictive effects simultaneously. In literature, very few papers explore this strategy. For example, Lanfear et al . presented a successful PRS application example in PGx GWAS study, which built a PRS using 44 SNPs with predictive (or treatment-by-SNP interaction) effects only. Second, the genetic architecture is more complicated when considering different effect correlation relationships (e.g. magnitudes and directions) between multiple traits in the BC and the phenotype in the TC. It remains a question whether the current multi-trait PRS (mtPRS) approaches built for disease multi-trait PRS analysis are still robust when applied to PGx GWAS or not. Overview of the PRS methods applied in PGx GWAS A large variety of methods are available for PRS construction and analysis. They differ from each other in terms of which variants and weights are used for constructing PRS and conducting PRS association analysis and/or prediction. Existing PRS methods generally fall into one of four categories: (i) clumping and thresholding (C + T) approaches, which shrink effect sizes of non-significant SNPs to zero according to their p-values, and account for LD by clumping variants at a given LD; (ii) methods that select variants jointly using penalized regression in the framework of ML, where the number of selected causal variants is controlled by the penalty parameter, and the LD matrix is intrinsic to the algorithm ; (iii) methods that account for LD through a linear mixed effects model, and estimate effect sizes as best linear unbiased predictions (BLUP) and (iv) Bayesian approaches that explicitly model causal effects and LD to infer the posterior distribution of causal effect sizes, where the shrinkage is controlled by the prior distributions, and the LD matrix is integral to the algorithm . Compared with simple C + T method, genome-wide model fitting with penalized or Bayesian regression generally better accounts for LD and has more efficient bias-variance trade-off, thus achieving better performance. In this paper, we summarized 25 PRS methods (including two novel methods/strategies we propose to tackle the challenges) using a three-level hierarchical structure . On the first level, we categorized the methods based on the type of BC (i.e. using disease GWAS only, using PGx GWAS only or leveraging both disease and PGx GWAS summary statistics). On the second level, the methods are further categorized based on trait and ancestry information (i.e. single-trait single-ancestry; single-trait trans-ethnic; and multi-trait single-ancestry approaches). We do not consider the multi-trait trans-ethnic approaches since no such methods are currently available. On the third level, methods are divided into the four categories we mentioned before: C + T, ML, BLUP and Bayesian regression. More details about the 23 existing methods including their software sources are summarized in . presents an overview of the methods used in the 90 articles identified in this study. As of March 11, 2022, 91.1% (82/90) of the papers employed the C + T method for PRS construction, which is simple and user-friendly but may discard informative SNPs, potentially limiting the prediction power. A small proportion of papers (8/90 = 8.9%) utilized Bayesian methods, including PRS-CS, LDpred and LDpred2. 
Only two papers (2/90 = 2.2%) used ML-based approaches, and no study investigated BLUP methods. Despite the prevalence of the C + T method in these studies, the use of more complex Bayesian and ML algorithms has been increasing since 2014, accounting for 39.1% and 34.8% of the studies published in 2022, respectively . This trend reflects a growing interest in employing more advanced PRS modeling methods to improve performance in both disease and PGx GWAS.
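Because C + T dominates the applied literature, a bare-bones version of it is sketched below for reference; the clumping step is reduced to a greedy loop over a pre-computed LD matrix, and all inputs (a sumstats data frame with effect sizes and p-values, an ld_r2 matrix, target genotypes G, and the two thresholds) are hypothetical stand-ins for what PLINK-style tooling would normally provide.

# Sketch of clumping + thresholding (C + T): keep the most significant SNP
# in each LD clump, discard SNPs above the p-value threshold, then score.
ct_prs <- function(sumstats, ld_r2, G, p_thresh = 5e-4, r2_thresh = 0.1) {
  M <- nrow(sumstats)
  keep <- logical(M)
  available <- rep(TRUE, M)
  for (i in order(sumstats$p)) {                 # most significant first
    if (!available[i] || sumstats$p[i] > p_thresh) next
    keep[i] <- TRUE
    available[ld_r2[i, ] > r2_thresh] <- FALSE   # clump away correlated SNPs
  }
  as.vector(G[, keep, drop = FALSE] %*% sumstats$beta_hat[keep])
}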
Using disease GWAS in the base cohort may lead to poor drug response prediction performance PGx PRS applications can be categorized into three types in terms of the BC GWAS type: disease GWAS, the PGx GWAS with treatment arm only, and the full PGx GWAS with two arms (treatment and control); and two types in terms of the TC study type: the PGx study with treatment arm only, and the PGx study with two arms. Using disease GWAS in the BC and then applying it to PGx studies represents the largest percentage (74/90 = 82%) of the 90 identified papers, possibly due to the widely available public resources of disease GWAS summary statistics and the difficulty in finding PGx GWAS data with the same or similar drug response endpoints. However, as we noted before, such a disease PRS approach can barely recover the full heritability of drug response, which is consistent with what we report in . Specifically, the use of disease GWAS in the BC results in the largest failure rate (defined as the proportion of the papers with no significant association found between the PRS and the drug responses): 20% (15/74); when the BC GWAS are from PGx studies, the failure rate decreases to 6% (2/16). This is not surprising since most papers using disease GWAS-based PRSs in PGx applications rely on the assumption that if a variant is a significant signal for some specific disease, then that variant is also causal for the response of the drug targeting that disease. However, the underlying genetic correlation between complex trait/disease risk and drug response needs to be carefully investigated and may vary case by case. Moreover, more studies now tend to focus on PGx studies with two arms and perform the interaction test of differential treatment effect between the high and low genetic risk subgroups when stratified by the PRS. Many such successful findings have been reported . Disease PRS is not able to recover the full heritability of drug response in theory We have previously proved that, in theory, it is difficult for a disease PRS to recover the full heritability of a drug response in PGx GWAS . Here, we briefly summarize the theoretical derivation and the main conclusions. Consider a high-dimensional regression model where the drug response, after adjusting for the covariates, is determined by three components: treatment, genotype and genotype-by-treatment interaction. Furthermore, to capture the correlation between prognostic and predictive effects, we assume the two effects follow a multivariate-normal distribution. Under these assumptions, we derive the squared correlation coefficient between a disease PRS ( $\mathrm{PRS}_{\mathrm{dis}}=\sum_{j=1}^{M}\hat{\beta}_j G_j$ ) and the drug response ( $Y$ ) for the treated subjects, denoted by $\mathrm{Cor}^2(\mathrm{PRS}_{\mathrm{dis}},Y)$ . We proved by the Cauchy-Schwarz inequality that $\mathrm{Cor}^2(\mathrm{PRS}_{\mathrm{dis}},Y)\le h^2$ , where $h^2$ denotes the underlying heritability of the drug response studied. This directly shows that a disease PRS cannot recover the full genetic variability of a drug response unless the equality holds. We further proved that the equality holds if and only if the interaction effect is proportional to the main effect for every causal variant, which is a very stringent assumption under PGx settings.
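The core of the argument can be written in a few lines. The display below is a condensed, informal version of the published derivation, taking the PRS weights equal to the true prognostic effects ($\hat{\beta}=\beta$) for simplicity; $G$ denotes the genotype vector, $\alpha$ the predictive effects, $\Sigma=\mathrm{Var}(G)$ and $\varepsilon$ the residual.

% Condensed sketch of the h^2 bound for a disease PRS in treated subjects
\[
Y = G^{\top}(\beta+\alpha) + \varepsilon, \qquad
g := G^{\top}(\beta+\alpha), \qquad
h^2 = \operatorname{Var}(g)/\operatorname{Var}(Y).
\]
Since $\mathrm{PRS}_{\mathrm{dis}} = G^{\top}\beta$ depends on $G$ only and $\varepsilon \perp G$,
\[
\mathrm{Cor}^2(\mathrm{PRS}_{\mathrm{dis}}, Y)
= \mathrm{Cor}^2\!\big(G^{\top}\beta,\, g\big)\, h^2
= \frac{\{\beta^{\top}\Sigma(\beta+\alpha)\}^{2}}
       {(\beta^{\top}\Sigma\beta)\,\{(\beta+\alpha)^{\top}\Sigma(\beta+\alpha)\}}\, h^2
\;\le\; h^2
\]
by the Cauchy-Schwarz inequality, with equality if and only if $\beta+\alpha \propto \beta$, that is, $\alpha_j \propto \beta_j$ for every causal variant.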
In , the authors used a specific example with real PGx GWAS data from the IMPROVE-IT RCT to calculate $\mathrm{Cor}^2(\mathrm{PRS}_{\mathrm{dis}},Y)=h^2(1-0.54)$ , which demonstrated that any PRS developed from disease GWAS explained at most 46% of the variability of the LDL-C drug response. This real data example highlights the importance of switching to using PGx GWAS/variants (like what does) or the combination of PGx and disease GWAS/variants as the BC in the PGx PRS modeling and analysis. Leveraging both PGx and disease GWAS in the base cohort for improving drug response prediction It is worth noting that there are certain limitations when calculating PRSs using PGx GWAS summary statistics only. First, it can be challenging to find two independent PGx GWAS data sets with the same or similar traits. Second, the sample size in PGx studies is usually small, which may lead to limited prediction power. In fact, Zhai et al . showed in their simulation studies that PGx PRS methods did not necessarily outperform disease PRS approaches in the control arm. In other words, if we only focus on the control arm of a PGx study, a disease PRS may still be useful compared to a PGx PRS due to its large sample size, which provides greater power in PRS prediction in terms of the prognostic effect component. However, in real scenarios we may be more interested in the treatment arm or both treatment and control arms. Therefore, a compromise and a direct alternative solution is to use both PGx GWAS data and disease GWAS summary statistics for PRS construction in the BC if both are available. In this section, we aim to compare different combinations between two PRS construction strategies of using summary statistics (either disease or PGx GWAS only, or both disease and PGx GWAS) and four PRS methods (C + T, Lassosum, PRS-CS and PRS-PGx-Bayes). Eight approaches will be compared, which are listed in . In this table, we mainly focus on how to incorporate strategies of leveraging summary statistics into PRS methods. The description details of the methods themselves can be found in . Simulation studies for evaluating the new strategy by leveraging both PGx and disease GWAS in the base cohort for PRS construction in PGx GWAS We performed extensive simulation studies to compare the performance of methods that use disease GWAS summary statistics only, PGx GWAS summary statistics only, or both. We simulated genotype data using the R package sim1000G v1.40 with different sample sizes. To simulate prognostic ( $\beta$ ) and predictive ( $\alpha$ ) effect sizes, we used the same spike-and-slab distribution as described in Zhai et al . . The two effects were either correlated (i.e. a causal variant had both non-zero prognostic and non-zero predictive effects) or fully separated (i.e. a causal variant had either a non-zero prognostic or a non-zero predictive effect). The prognostic effect is either on the same scale as, dominated by, or dominating the predictive effect. Details of the data generation process are provided in Supplementary Method A. shows the scenario where prognostic and predictive effects are correlated, and the two effect sizes are on the same scale. Results were assessed via internal 5-fold cross-validation in the TC. Specifically, the whole TC was randomly split into five folds. The tuning parameters were determined using four folds, and the PRS was constructed and recorded in the remaining fold.
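For readers who want to reproduce the flavor of this setup, a minimal R sketch of such a generative model is shown below; it is not the exact simulation code used here (sim1000G genotypes, the published spike-and-slab parameters and LD are omitted), and all numeric settings are illustrative assumptions.

# Illustrative sketch of a PGx data-generating model with spike-and-slab
# prognostic (beta) and predictive (alpha) effects; settings are assumptions.
library(MASS)
set.seed(1)
n <- 2000; M <- 500                              # subjects and SNPs
maf <- runif(M, 0.05, 0.5)
G <- sapply(maf, function(p) rbinom(n, 2, p))    # independent genotypes (no LD)
Trt <- rbinom(n, 1, 0.5)                         # 1 = treated, 0 = control
p_causal <- 0.01                                 # slab probability (sparsity)
causal <- rbinom(M, 1, p_causal) == 1
rho <- 0.5                                       # prognostic-predictive correlation
Sigma <- 0.05 * matrix(c(1, rho, rho, 1), 2, 2)
eff <- matrix(0, M, 2)                           # columns: beta, alpha
eff[causal, ] <- mvrnorm(sum(causal), mu = c(0, 0), Sigma = Sigma)
Y <- as.vector(0.5 * Trt + G %*% eff[, 1] + (Trt * G) %*% eff[, 2] + rnorm(n))
# Marginal PGx GWAS for one SNP: main effect plus SNP-by-treatment interaction
summary(lm(Y ~ G[, 1] * Trt))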
In , we initially employed disease PRS methods (C + T, Lassosum, PRS-CS) using disease GWAS summary statistics only as the base. Subsequently, we incorporated additional PGx data to check the potential improvement of disease PRS methods. Similarly, we utilized the PGx PRS method (PRS-PGx-Bayes) using PGx GWAS summary statistics only as the base, and then added additional disease genetics data to assess the improvement. All disease PRS and PGx PRS methods showed substantial improvement in drug response prediction and patient stratification after leveraging additional GWAS information compared to their counterpart traditional approaches using either disease or PGx GWAS alone. For example, PRS-PGx-Bayes (Disease + PGx) generally outperformed PRS-PGx-Bayes (PGx), especially under the higher polygenicity scenario when the sample size of PGx GWAS data was small. In addition, when the sample size of PGx GWAS data was large enough (for example, n = 10 000), PRS-PGx-Bayes (PGx) was superior to PRS-PGx-Bayes (Disease + PGx) as shown in both with internal cross-validation and with external validation (i.e. the optimal tuning parameter was selected using an independently simulated validation PGx dataset with sample size 1000 as shown in Supplementary Method A.5). One possible explanation is the difference in the prognostic effect estimates between disease and PGx GWAS. The same pattern was also observed when the proportion of causal SNPs increased from 0.001 to 0.1, although all methods generally had lower performance. Sensitivity analyses results are provided in – . Specifically, in , we repeated our comparisons using external validation, where the tuning parameters were selected with an independent validation dataset. The same patterns held compared to the case where internal validation was used. shows the scenario where the prognostic and predictive effect sizes were on different scales. When the prognostic effect dominated the predictive effect, it is not surprising that incorporating disease genetics data into the PGx PRS approach had a much larger improvement than incorporating PGx data into disease PRS methods. When the predictive effect dominated the prognostic effect, PRS-PGx-Bayes (Disease + PGx) consistently achieved the highest prediction accuracy. checks the situation when the prognostic and predictive effects were fully separated. In that setting, the disease PRS methods using disease GWAS summary statistics alone had the lowest $R^2$ since they could hardly capture any variability explained by the interaction. Thus, incorporating PGx GWAS information greatly improved the performance of disease PRS methods.
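As a schematic companion to this comparison, the R sketch below conveys the 'Disease + PGx' idea in its simplest possible form: disease-GWAS weights supply the prognostic component while PGx-GWAS weights supply the predictive component. It deliberately ignores the Bayesian shrinkage and LD modeling of PRS-PGx-Bayes (Disease + PGx), reuses the objects G, Trt, Y, eff and M from the simulation sketch above, and the noise levels attached to the stand-in summary statistics are arbitrary assumptions.

# Sketch: a two-component PRS whose prognostic weights come from a (large)
# disease GWAS and whose predictive weights come from a (small) PGx GWAS.
beta_dis  <- eff[, 1] + rnorm(M, sd = 0.01)  # stand-in disease GWAS estimates
beta_pgx  <- eff[, 1] + rnorm(M, sd = 0.05)  # noisier: small PGx sample size
alpha_pgx <- eff[, 2] + rnorm(M, sd = 0.05)  # interaction estimates (PGx only)
prs_prog_dis <- as.vector(G %*% beta_dis)    # prognostic component, disease weights
prs_prog_pgx <- as.vector(G %*% beta_pgx)    # prognostic component, PGx weights
prs_pred_pgx <- as.vector(G %*% alpha_pgx)   # predictive component, PGx weights
fit_pgx  <- lm(Y ~ Trt + prs_prog_pgx + Trt:prs_pred_pgx)   # "PGx only"
fit_both <- lm(Y ~ Trt + prs_prog_dis + Trt:prs_pred_pgx)   # "Disease + PGx"
sapply(list(PGx = fit_pgx, Disease_plus_PGx = fit_both),
       function(f) summary(f)$r.squared)     # compare in-sample fit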
Overview of the cross-population PRS applications in PGx GWAS PGx PRS applications could be classified into three categories based on the BC ancestry: European, non-European (e.g. Asian, African) and Multiple (which is the mixture of, in most cases, European and TC non-European population); and three categories based on the TC ancestry: European, non-European and Multiple. No published articles were found to build PRSs with non-European ancestry alone in the BC unless the TC population is also non-European (the percentage is 8/90 = 9% in ). On the other hand, applying a PRS built from European ancestry to the European population in the TC (which is referred to as ‘European + European’) remains the largest proportion (44/90 = 49%) of 90 selected studies, which is not surprising. In addition, Multiple + European doesn’t result in a higher success rate (defined as the proportion of the papers with significant associations found between the PRSs and the drug responses) compared to European + European (9/14 = 64% versus 36/44 = 82%). One possible explanation is that adding non-European subjects may add more noise relative to the amount of additional information. When the TC population is non-European, a trans-ethnic PRS analysis method is needed since a traditional PRS built on European population has limited transferability across ancestry groups , and a PRS built on non-European GWAS might yield reduced prediction accuracy due to its small sample size. However, among the 90 articles identified, only one paper followed this trans-ethnic PRS strategy, and failed to find any significant association between PRSs and the drug response. Due to the limited exploration of trans-ethnic analysis in PGx applications, it may imply great potential opportunities for applying cross-population analysis in future PGx studies. A new trans-ethnic PGx PRS method (PRS-PGx-Bayesx) for cross-population PRS analysis in PGx GWAS A few PRS methods have been published for trans-ethnic analysis in disease genetics, which can be adapted to the PGx setting. A brief description of these methods is listed below, using two populations, EUR and EAS, as an example. CT-Meta: C + T based on Meta-GWAS summary statistics, which is calculated by aggregating two disease GWAS summary statistics from EUR and EAS together.
Multi-ethnic PRS : $\mathrm{PRS}_0=\omega\,\mathrm{PRS}_{\mathrm{EUR}}+(1-\omega)\,\mathrm{PRS}_{\mathrm{EAS}}$ , which uses a grid to search the optimal $\omega$ using cross-validation. PRS-CSx : a method extended from the PRS-CS method for trans-ethnic analysis via a shared continuous shrinkage prior across different populations. However, to our knowledge, no method has been adapted to trans-ethnic PGx PRS analysis yet. In this study, we propose a novel method (PRS-PGx-Bayesx) which is an extension of the PRS-PGx-Bayes method for cross-population PRS construction and analysis. Consider $K$ high-dimensional Bayesian regression models of $n_k$ patients and $M$ SNPs from $K$ studies (or populations in the trans-ethnic GWAS case): $Y_k=G_k\boldsymbol{\beta}_k+(T_k\circ G_k)\boldsymbol{\alpha}_k+\boldsymbol{\epsilon}_k$ , $\boldsymbol{\epsilon}_k\sim N(0,\sigma_k^2 I)$ , $k=1,\dots,K$ , where $Y_k$ , $T_k$ and $G_k$ denote the drug response, the binary treatment assignment and the $n_k\times M$ genotype matrix in study $k$ , respectively. $\beta_{jk}$ and $\alpha_{jk}$ are the prognostic and predictive effects of SNP $j$ in study $k$ , respectively. The regression coefficient $b_{jk}=(\beta_{jk},\alpha_{jk})^{\mathsf T}$ is assumed to be random in the PRS-PGx-Bayesx method, following a bivariate normal prior $b_{jk}\sim N\big(0,(\sigma_k^2/n_k)\,\phi\,\Phi_j\big)$ , where the global parameter $\phi$ controls the overall degree of shrinkage, while the diagonal elements $\psi_{1j}$ and $\psi_{2j}$ of the $2\times 2$ matrix $\Phi_j$ control the marker-specific degree of shrinkage (its off-diagonal element captures the correlation between the prognostic and predictive effects). Both $\phi$ and $(\psi_{1j},\psi_{2j})$ are shared across all $K$ studies. Further, assume the residual variance $\sigma_k^2$ follows a non-informative scale-invariant Jeffreys prior, that is, $\pi(\sigma_k^2)\propto 1/\sigma_k^2$ . As suggested by Zhai et al . , we propose to use the hierarchical half-t prior on the variance–covariance matrix $\Phi_j$ , constructed through an inverse-Wishart distribution whose diagonal scale matrix has Gamma-distributed elements. Given the above prior information, the full conditional posterior distributions of $b_{jk}$ , $\sigma_k^2$ , $\Phi_j$ and the scale parameters can be derived in closed form (multivariate normal, inverse-Gamma, inverse-Wishart and Gamma-type, respectively), so that the model can be fitted by standard posterior sampling. It is worth noting that, as derived in Zhai et al . , an LD reference panel of population $k$ is required to calculate the SNP correlation (LD) matrix entering the posterior updates, for each $k=1,\dots,K$ . The detailed theoretical derivation is provided in Supplementary Method B, and the detailed algorithm of the PRS-PGx-Bayesx method is summarized in . We ran extensive simulations to compare the proposed PRS-PGx-Bayesx method with existing trans-ethnic PRS methods mentioned above.
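To make the simplest of the strategies listed above concrete, the R sketch below implements the linear-combination multi-ethnic PRS with a cross-validated grid search over the mixing weight; prs_eur and prs_eas stand for PRSs already computed in the target cohort from EUR and EAS base summary statistics, y is the drug response, and the fold scheme and grid are illustrative assumptions.

# Sketch of the linear-combination multi-ethnic PRS:
# PRS_0 = w * PRS_EUR + (1 - w) * PRS_EAS, with w chosen by cross-validation.
pick_weight <- function(y, prs_eur, prs_eas, n_fold = 5,
                        grid = seq(0, 1, by = 0.05)) {
  fold <- sample(rep(seq_len(n_fold), length.out = length(y)))
  cv_r2 <- sapply(grid, function(w) {
    prs <- w * prs_eur + (1 - w) * prs_eas
    mean(sapply(seq_len(n_fold), function(k) {
      train <- fold != k
      fit <- lm(y[train] ~ prs[train])
      pred <- cbind(1, prs[!train]) %*% coef(fit)
      cor(pred, y[!train])^2                 # out-of-fold accuracy
    }))
  })
  grid[which.max(cv_r2)]                     # weight with best mean CV R^2
}

More elaborate approaches such as PRS-CSx replace the single mixing weight with a shared continuous shrinkage prior that couples the per-population effect estimates.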
Simulation studies for evaluating the proposed trans-ethnic PGx PRS method (PRS-PGx-Bayesx) In this section, we performed extensive simulation studies to compare the performance of multiple trans-ethnic methods with single-ethnic methods when the TC population is either European (EUR) or East Asian (EAS). Simulation studies were performed with simulated genotype data using the sim1000G software for both EUR and EAS. We used a vector of parameters $\boldsymbol{\theta}=(\beta_{B1},\beta_{B2},\alpha_{B1},\alpha_{B2},\beta_{T1},\beta_{T2},\alpha_{T1},\alpha_{T2})$ to denote the underlying true prognostic ( $\beta_B$ , $\beta_T$ )/predictive ( $\alpha_B$ , $\alpha_T$ ) effect sizes in the base ( $\beta_B$ , $\alpha_B$ )/target ( $\beta_T$ , $\alpha_T$ ) cohort of the European (with index 1)/non-European (with index 2) population. Following the simulation setup by Ruan et al . , we further assumed $\boldsymbol{\theta}$ follows a multivariate-normal distribution with a well-defined variance–covariance matrix $\Sigma(\rho_{\beta\alpha},\rho_{\mathrm{pop}},\rho_{BT})$ , where $\rho_{\beta\alpha}$ measures the correlation between prognostic and predictive effects, $\rho_{\mathrm{pop}}$ measures the effect correlation between European and non-European populations, and $\rho_{BT}$ measures the effect correlation between BC and TC. Details of the data generation process are provided in Supplementary Method C. shows that under our simulation setup (i.e. EUR-EAS effect correlation $\rho_{\mathrm{pop}}=0.5$ ), constructing PRSs using EUR alone outperforms using EAS alone even when the TC is EAS, and using EAS in the BC to predict a EUR TC results in the smallest $R^2$ and the largest interaction p-value. When the TC is EUR, using EUR + EAS in the BC has limited improvement compared to using EUR alone due to the small sample size of EAS and the moderate correlation between EUR and EAS. On the other hand, when the TC is EAS, trans-ethnic analysis of EUR + EAS is clearly superior to using EAS alone by incorporating the information of the EUR population. Lastly, the PRS-PGx-Bayesx method is superior to all the other methods for drug response prediction across different proportions of causal variants. For sensitivity analysis, we considered scenarios where the EUR-EAS effect correlation is low ( $\rho_{\mathrm{pop}}=0.1$ ; ) and where the EUR-EAS effect correlation is high ( $\rho_{\mathrm{pop}}=0.9$ ; ). When $\rho_{\mathrm{pop}}=0.1$ , using EUR for the prediction of EAS and using EAS for the prediction of EUR both yielded the lowest prediction accuracy. Furthermore, trans-ethnic predictions are not necessarily superior to single-ethnic ones, since integrating EAS in the BC to predict EUR in the TC may add noise rather than signal, and vice versa. When $\rho_{\mathrm{pop}}=0.9$ , all approaches have better performance. Both and indicate that our proposed novel PRS-PGx-Bayesx still outperforms other methods.
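A compact R sketch of the effect-size part of this setup is given below: per-SNP prognostic and predictive effects for the two populations are drawn jointly from a multivariate normal whose covariance is built from the prognostic-predictive correlation and the EUR-EAS correlation (the base-target correlation is omitted for brevity). The Kronecker construction and all numeric values are illustrative assumptions rather than the Supplementary Method C settings.

# Sketch: draw per-SNP effects for (prognostic, predictive) x (EUR, EAS) with
# correlation rho_ba between effect types and rho_pop between populations.
library(MASS)
M <- 1000                                        # number of causal SNPs
rho_ba  <- 0.5                                   # prognostic-predictive correlation
rho_pop <- 0.5                                   # EUR-EAS effect correlation
R_ba  <- matrix(c(1, rho_ba,  rho_ba,  1), 2, 2)
R_pop <- matrix(c(1, rho_pop, rho_pop, 1), 2, 2)
Sigma <- 0.01 * kronecker(R_ba, R_pop)           # 4 x 4, separable by assumption
eff <- mvrnorm(M, mu = rep(0, 4), Sigma = Sigma)
colnames(eff) <- c("beta_EUR", "beta_EAS", "alpha_EUR", "alpha_EAS")
round(cor(eff), 2)                               # empirical correlations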
We categorized the 90 selected papers into three categories: those using a single trait, those exploring multiple traits one at a time, and those building PRSs by aggregating multiple traits together. indicates that the three approaches had similar success rates of 82% (53/65), 85% (17/20) and 80% (4/5), respectively; the similarity is possibly due to the very small number of multi-trait PRS papers. With the success of leveraging information from multiple related traits for signal detection in GWAS and WES (SNP-set analysis) , it is also appealing to use multiple traits to increase the power of PRS analysis for drug response prediction and patient stratification in PGx GWAS. Our review results in show that some PGx GWAS are starting to use multi-trait PRS methods. However, it remains unclear which methods are robust under different genetic architectures in PGx studies. There are a variety of multi-trait PRS analysis methods in the literature. For example, the regression-based methods fit a linear or a penalized regression model with individual PRSs from multiple traits as predictors; the meta-GWAS-based methods construct the PRSs using meta-GWAS summary statistics by aggregating individual GWAS summary statistics from multiple traits; the BLUP-based method (wMT-SBLUP) combines the single-trait predictors with BLUP properties in a weighted index calculated from genome-wide SNP heritability, genetic correlation between traits, and expected squared correlations between the phenotype and BLUP predictors; the PCA-based method (mtPRS-PCA) combines PRSs from multiple traits with weights obtained from performing PCA on the genetic correlation matrix, etc. In addition, a more robust method, mtPRS-O, combines several complementary multi-trait PRS methods via the Cauchy combination test (sketched below). With a variety of multi-trait PRS analysis methods developed, there is an urgent need to systematically evaluate their robustness to different genetic architectures. Zhai et al . provided a comprehensive simulation framework and ran extensive simulations to systematically compare most of the multi-trait PRS methods under various genetic architectures covering different effect directions, signal sparseness and cross-trait correlation structures. briefly summarizes the main features of the existing multi-trait PRS methods in terms of their pros, cons and performances in simulation studies from and our additional simulation analyses (the detailed results are not shown).
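Since mtPRS-O rests on the Cauchy combination test, a minimal generic implementation of that combination rule is sketched below (it is not the mtPRS-O software); the input p-values and the equal weights are illustrative.

# Sketch of the Cauchy combination test used to aggregate association
# p-values from several (possibly correlated) multi-trait PRS tests.
cauchy_combine <- function(p, w = rep(1 / length(p), length(p))) {
  stopifnot(all(p > 0 & p < 1), length(w) == length(p))
  t_stat <- sum(w * tan((0.5 - p) * pi))   # weighted Cauchy-transformed statistic
  0.5 - atan(t_stat) / pi                  # back-transform to a combined p-value
}
cauchy_combine(c(0.04, 0.20, 0.008))       # example: three trait-PRS p-values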
In terms of the PRS association test, mtPRS-O is the most robust method and achieves consistently larger power compared with other multi-trait methods; however, in terms of drug response prediction, no single method uniformly outperforms the others across all scenarios. In summary, integrating multiple genetically correlated traits from disease GWAS does increase the power for PRS-based drug response prediction and patient stratification in PGx GWAS. In this study, we systematically review 90 PRS application papers in PGx GWAS for drug response prediction and patient stratification. We summarize 23 PRS methodologies from these 90 PRS application papers and three other PRS method review papers. From this review, we show that although both PRS application and PRS method development have progressed rapidly in the PGx field, PRS analysis in PGx GWAS still faces multiple challenges from the PRS analysis method standpoint. In this paper, we mainly focus on three key challenges: (i) the lack of knowledge on choosing PGx, disease or both GWAS summary statistics in the BC for PRS construction; (ii) the Eurocentric or trans-ethnic bias in cross-population PRS prediction; (iii) the small sample size, low power and more complex PRS modeling in PGx GWAS. We further propose two new PRS analysis strategies and methods to tackle these challenges. For the first challenge, under PGx settings, we compare traditional disease GWAS-based methods (C + T, Lassosum, PRS-CS) and the PGx GWAS-based method (PRS-PGx-Bayes) with our proposed novel strategy of leveraging both disease and PGx GWAS summary statistics to construct the PRS, PRS-PGx-Bayes (Disease + PGx). It is an extension of PRS-PGx-Bayes that replaces the prognostic effect size estimates from PGx GWAS summary statistics with the effect size estimates from disease GWAS summary statistics, which is simple and easy to implement. In the simulation studies, we find that the combination of disease GWAS-based PRS analysis methods and additional information from PGx GWAS improves the drug response prediction accuracy. Moreover, PRS-PGx-Bayes (Disease + PGx) generally outperforms the other methods, especially when the sample size of the PGx GWAS in the BC is not large enough. Intuitively, using PGx GWAS summary statistics allows us to model the prognostic effect (i.e. genotype main effect) and predictive effect (i.e. genotype-by-treatment interaction effect) simultaneously, while including disease GWAS summary statistics from disease genetics studies generally provides a much larger sample size than PGx studies, which improves the modeling of the prognostic effect component of a PGx PRS. For the second challenge about the Eurocentric or trans-ethnic bias, although there are already some PRS methods in the literature from the disease genetics field, their performance in the PGx GWAS setting is unknown. To reduce the Eurocentric or trans-ethnic prediction bias, we propose a novel method, PRS-PGx-Bayesx, which is an extension of PRS-PGx-Bayes for trans-ethnic analysis by updating the global–local shrinkage parameters with cross-population information. Specifically, we assume that the variance–covariance matrix of prognostic and predictive effects of SNP $j$ ( $\Phi_j$ ) is shared across all $K$ studies/populations, and the posterior distribution of $\Phi_j$ is derived conditional on all $K$ studies.
Our simulation studies indicate that PRS-PGx-Bayesx is superior to other methods regardless of the correlation between European and non-European populations. For the third challenge, we focus on reviewing the methods which integrate multiple traits during PGx PRS construction and provide a natural way of increasing power for PRS analysis in PGx GWAS. In addition to the review of the applications of mtPRS methods in PGx GWAS, we further refer to our previous research work and provide a comprehensive summary of the performance of the existing multi-trait PRS methods in PGx GWAS. Our study has some limitations. First, in this review, the study screening was conducted by two independent reviewers (Z.S. and S.J.) to minimize selection bias. However, there is currently no risk of bias assessment tool for PRS-related reviews . Therefore, some evidence may still be missing due to publication bias (e.g. negative results are less likely to be reported) . Second, our efforts of tackling the first challenge demonstrate that leveraging both disease and PGx GWAS summary statistics may further improve the power of PGx PRS analysis. However, it would be more challenging to obtain both disease GWAS and PGx GWAS with similar (or genetically correlated) phenotypes. In addition, it is difficult to evaluate the relationship between a disease phenotype and a relevant drug response phenotype (e.g. by calculating their genetic correlation). Our proposed solution of replacing the prognostic effect estimates ( $\hat{\beta}$ ) of the PGx GWAS with those from the disease GWAS is simple and intuitive. However, there is a gap between the prognostic effect sizes from disease GWAS and those from PGx GWAS. Therefore, our simple strategy may not increase the prediction accuracy, especially when the sample size of the PGx GWAS is large enough (we have demonstrated this point in the simulation results summarized in ). In theory, more complicated models can be constructed to further increase the PRS prediction performance. However, they may face additional barriers in clinical interpretation and implementation. In addition, the PRS-PGx-Bayesx method we develop for tackling the second challenge is a Bayesian method, which requires relatively long computation time (as shown in ). Therefore, it is recommended to run the PRS-PGx-Bayesx method by LD blocks in a high-performance or parallel computing environment. Besides, PRS-PGx-Bayesx is an extension of PRS-PGx-Bayes for cross-population analysis, and it shares the same challenges as the PRS-PGx-Bayes method. For example, PRS-PGx-Bayesx uses one of the most popular continuous shrinkage priors, the global–local scale mixture of normals (i.e. the Horseshoe prior), for effect size shrinkage. There are other candidate priors (e.g. the spike-and-slab prior and the Normal-Gamma shrinkage prior), and the currently existing Bayesian methods do not have a systematic way to determine the optimal prior. Third, we focus our study on the three main challenges in current PRS analyses under PGx settings. Other challenges mentioned before (e.g. the lack of guidance in clinical interpretation of PGx polygenic models) are beyond the scope of this paper, but have been discussed in other PGx PRS review papers . Fourth, phenotypic characterization also presents a unique challenge within pharmacogenomic research as many drug outcomes are difficult to measure quantitatively .
Discrepant phenotyping may lead to different polygenic models being constructed depending on the definition of the drug outcome, since different definitions result in different effective sample sizes and power for PRS analyses and predictions. Finally, with the rapid development of PRS applications and methodologies as shown in this paper, more effort is needed to accelerate PGx PRS applications with clinical utility. Further research should now aim at comparing drug response prediction accuracy with and without the use of PRSs to demonstrate the benefit of PRSs in PGx applications. As this field of research continues to grow, we believe that there are many promising applications for the use of PRSs in the context of PGx and precision medicine to further improve treatment outcomes. Our efforts in reviewing the current progress of PRS applications and methods in PGx GWAS, identifying the main current challenges and proposing new analysis strategies and methods to overcome them in PGx PRS applications help move the field a step forward and may accelerate the translation of PRSs to clinical practice. Key Points The application of PRSs in PGx GWAS has begun to show great potential for improving patient stratification and drug response prediction. However, applying PRSs to PGx GWAS faces multiple challenges including (i) the lack of knowledge about whether PGx, disease or both GWAS/variants should be used in the BC; (ii) the Eurocentric or trans-ethnic bias; (iii) small sample sizes in PGx GWAS with low power and (iv) the more complex PRS modeling while handling both prognostic and predictive effects simultaneously. We conduct a systematic review of current progress in both PRS applications and PRS method developments in PGx GWAS to gain insights about the general trends, challenges and possible solutions. We further propose (i) a novel PRS application strategy leveraging both PGx and disease GWAS summary statistics in the BC for PRS construction in PGx PRS applications and (ii) a new Bayesian method (PRS-PGx-Bayesx) to reduce Eurocentric or across-population PRS prediction bias. Our extensive simulations demonstrate their advantages over existing PRS methods applied in PGx GWAS. Our systematic review and methodology research work in this paper not only highlights current gaps and key considerations when applying PRS methods to PGx GWAS, but also provides possible solutions for better PGx PRS applications and future research. 
Comprehensive assessments and related interventions to enhance the long-term outcomes of child, adolescent and young adult cancer survivors – presentation of the CARE for CAYA-Program study protocol and associated literature review
db421a11-3c3d-447f-90b2-22928a850add
6945396
Preventive Medicine[mh]
Epidemiology Roughly 500,000 people receive a new cancer diagnosis every year in Germany, of which 2200 (0.4%) are under the age of 18 and 16,000 (3.0%) are between the ages of 19 and 39. This relatively small group of cancer patients under the age of 39 are called “CAYAs” (children, adolescents and young adults). While there are many differences within this group, including the heterogeneity of cancer diagnosis, treatment protocol and current life situation, they also have a lot in common, for example the relatively high cure rates (> 80%) and the aggressive multimodal treatment that increases the risk of long-term sequelae . The most commonly diagnosed cancers in adolescent and young adult women (15 to 39 years old) are breast cancer (28%), melanoma (16%), thyroid cancer (11%) and cervical cancer (10%) . In 15 to 39 year old men, germ cell tumours (34%), melanoma (11%), Hodgkin’s lymphoma (8%) and non-Hodgkin’s lymphoma (6%) are the most prevalent cancers . In children (under the age of 15), leukaemia (33%), brain tumours (24%) and lymphoma (11%) are predominantly diagnosed . Long-term sequelae in CAYA cancer survivors Cancer treatment may cause immediate side effects occurring during or directly after treatment (e.g. haematological or gastrointestinal toxicities), which are generally detected immediately and treated with the respective supportive measures. However, the treatment may also cause late effects, which may not become apparent until years or even decades later (e.g. cardiac toxicities or secondary cancers). The Childhood Cancer Survivor Study (CCSS), utilizing long-term follow-up of 10,397 CAYAs, found that two out of every three CAYAs have at least one treatment-related long-term toxicity, with one out of three CAYAs developing a severe or life-threatening late effect . Disease- or treatment-related long-term toxicities may affect any organ, e.g. heart, lungs, gastrointestinal tract, kidneys and bladder, skin, eyes, brain, bones or the endocrine and reproductive systems, and are not necessarily confined to the organ of the original cancer diagnosis . Furthermore, psychosocial issues, for example the fear of recurrence, fear and anxiety concerning their future, depression, post-traumatic stress disorder (PTSD), long-term educational and work problems or social and behavioural difficulties, are common . Physical long-term sequelae The most commonly reported long-term toxicities in cancer survivors are cardiovascular diseases like cardiomyopathy, chronic heart failure or valvular disorder, which occur with a five- to 15-fold increased frequency, and at an earlier age, when compared to the general population . The individual risk for the development of cardiovascular disease is determined by treatment-related factors (e.g. type, mode of administration and cumulative dose of chemotherapy and/or chest radiotherapy) and non-treatment-related factors like lifestyle (e.g. smoking) or relevant co-morbidities (e.g. dyslipoproteinaemia or hypertension). Chest-directed radiotherapy is associated with an increased risk of myocardial infarction, congestive heart failure, valvular heart disease, and arrhythmias . Anthracycline chemotherapy increases the risk of heart failure . CAYAs exposed to prior anthracycline-based treatment and chest radiation have the highest treatment-related risk for cardiovascular diseases. Thus, aftercare focusing not only on tumour relapse or second cancer, but also on improving modifiable lifestyle risk factors, is of particular importance. 
CAYAs are more often obese when compared to siblings, especially after hypothalamic injury due to resection, radiotherapy or high doses of corticosteroids (e.g. after brain cancer or acute lymphoblastic leukaemia (ALL) treatment) . High incidence rates of diabetes mellitus and insulin resistance (roughly 50%) are reported after allogeneic haematopoietic stem cell transplantation (HSCT) or abdominal radiotherapy for solid tumours . Up to one in five CAYAs have problems with decreased bone mineral density due to the direct impact of the cancer itself (e.g. leukaemia), corticosteroid treatment, osteotoxic chemo- and/or radiotherapy, treatment-induced endocrine disorders (e.g. growth hormone deficiency or hypogonadism), malnutrition, physical impairment or reduced muscle strength . These long-term effects may influence the lifestyle of CAYAs and therefore increase the risks of long-term side effects like cardiovascular diseases. Psychological and social sequelae Due to the disturbance of the psychosocial development period during childhood, adolescence and young adulthood, CAYAs are particularly vulnerable to psychosocial problems . Although a cancer diagnosis clearly impacts the psychosocial situation at every age, the CAYA age is a critical period in life. Establishing identity, developing a sexual identity and a positive body image, as well as separating from parents, being around peers and (starting to) make decisions regarding career and employment, education and family are the typical concerns of young people transitioning from childhood to adulthood . Therefore, cancer and cancer-related issues (e.g. confrontation with mortality, changes in body image, dependence on parents, disruptions in social life and education / employment, loss of reproductive capacity) can be more stressful for cancer survivors than for healthy young adults . As a result, compared to the general population, the risk for behavioural and educational problems is twice as high; and quality of life, mental well-being and life satisfaction are much lower in CAYAs with cancer . CAYAs often have difficulties with reintegration into school, work, education and everyday life, which may lead to missed graduation and financial problems. Furthermore, not all cancer survivors are able to return to work or school at all . About 72% of patients who were working, or in school, full-time before diagnosis returned to full-time work or school 15 to 35 months post diagnosis, but only 34% of previously part-time workers/students and 7% of homemakers returned . In addition, young adult survivors of childhood allogeneic HSCT have high unemployment rates at all attained ages (18–22 (56%), 23–37 (53%) and 28–32 (68%) years) . When compared to the general population, CAYAs have more educational or other school problems (46% vs. 23%), including having to repeat a grade (21% vs. 9%) and developing a learning disability (19% vs. 7%) or having to attend special-education programmes (20% vs. 8%) . CAYAs with central nervous system (CNS) tumours or leukaemia receiving CNS radiation are at a particularly high risk for problems at school . In addition, cancer history may influence social relationships and interactions. CAYAs tend to have fewer close friends (19% vs. 8%) and were less likely to use friends as confidants (58% vs. 67%) when compared to peers . Young adult cancer survivors are more likely to divorce or separate than same-age controls . 
Nearly 50% of CAYAs have reported financial distress, annual productivity loss, or debt accumulation due to treatment costs, or did not adhere to recommended prescription medication because of uninsured costs . Furthermore, survivors of childhood cancer were at high risk for hospitalization, and spent an average of five times more days in hospital, when compared to controls . Major reasons for hospitalization among cancer survivors include diseases of the nervous system (19.1% of all excess hospitalizations), endocrine system (11.1%), digestive organs (10.5%) and respiratory system (10.0%) . Lifestyle and risky health behaviour of cancer survivors Although CAYAs have faced a severe life-threatening disease in their early years, up to 35.8% of survivors will develop a risky health behaviour (sexual behaviour, tobacco, alcohol, or illicit drugs) . However, data comparing the risky behaviour to siblings or the general population remain inconsistent. Some studies report that cancer survivors smoke, consume alcohol and use illicit drugs at rates lower than siblings , but other studies found no difference or increased risky health behaviour among AYA survivors of childhood cancer . A recent meta-analysis of the available literature showed that 22% of survivors smoked, 20% were binge drinkers, and 15% used drugs . In addition to risky behaviour, survivors tend to have an unhealthy lifestyle, with only 10% following a healthy lifestyle . A large number of cancer survivors are overweight (58%), eat less than the recommended five servings of fruits and vegetables per day (82%) or fail to do any sport activities (55%) . In the INAYA1 (“Improved Nutrition in AYAs”) trial, 74 and 22% of CAYAs had a moderate and bad nutritional behaviour, respectively . Similar results were shown in the INAYA2 trial, with 66 and 14% having a moderate or bad nutritional behaviour (presentation DGHO 2018) . Additionally, 15% of CAYAs consume an excessive amount of salt (≥ 10 g per day). Both studies showed that only a few childhood cancer survivors met the nutrition recommendations of the German Nutrition Society (DGE) ( www.dge.de/10regeln ). Similar results were found in American childhood cancer survivors, whose mean HEI-2010 was about 50% of the maximum score . Interestingly, long-term survivors (time from diagnosis ≥10 years) had a significantly lower HEI-2010 than recent survivors (time from diagnosis < 5 years) ( P = 0.047). CAYAs struggle to adhere to the consumption of green vegetables and beans, total vegetables and whole fruits. No survivor met the guidelines for dietary fibre and potassium intake and only a few met the guidelines for vitamin D, sodium, calcium, and saturated fat intake. The average for saturated fat and for sodium was 115 and 143%, respectively . Another relevant factor of a healthy lifestyle is regular physical activity. Previous studies have shown that CAYAs were insufficiently active compared with controls and had a low motor performance at the end of the acute treatment phase , with serious reductions in motor performance within two years after bone tumour treatment. The positive impact of physical activity on the risk for long-term sequelae has been shown in a variety of retrospective studies, with very few focusing on CAYAs. In HSCT survivors, correlations between increased physical activity levels (endurance) and lower waist circumference, lower percent fat mass and greater insulin sensitivity were noted . 
A prevalent and distressing symptom in children and adolescents with cancer, and in those who have undergone HSCT, is fatigue. A multidisciplinary group of experts in paediatric oncology and fatigue developed a clinical practice guideline for the management of fatigue with a focus on physical activity, relaxation and mindfulness . A report from the CCSS noted that Hodgkin’s lymphoma survivors (median age 31.2 years) regularly undergoing vigorous exercise (≥ 9 metabolic equivalent [MET] hours/week [h/wk]) had a significantly lower risk of treatment-related cardiovascular events than survivors not meeting the guidelines for vigorous intensity exercise. For survivors who reported ≥ 9 MET-h/wk, the cumulative incidence of any cardiovascular event was 5.2% at ten years from baseline. In comparison, the cumulative incidence for survivors who reported 0 MET-h/wk had more than doubled to 12.2% . In an analysis of 15,450 adult cancer survivors (median age 25.9 years) from the CCSS cohort, at 15 years from baseline an increase in vigorous exercise over an eight-year period was associated with a significant 40% reduction in the risk of all-cause mortality, compared with survivors who maintained only low levels of exercise (3 to 6 MET-h/wk) . Lifestyle interventions Improving lifestyle behaviour is key to reducing the risk for cardiovascular long-term toxicities in particular. Given that a sedentary lifestyle, lack of physical activity and poor nutrition increase the risk of cardiovascular diseases , there is an unused opportunity to improve the young cancer survivors’ risk profile. Thus, several interventional trials focused on CAYAs have since been undertaken. The INAYA1 trial aimed to evaluate the feasibility and the impact of an intensified nutrition counselling programme targeted at the at-risk subgroup of CAYAs . Nutritional behaviour had improved by week 12 of intensified nutrition counselling: a good, moderate and bad nutritional intake was seen in 48, 52 and 0% of CAYAs, compared with 4, 74 and 22% at baseline, respectively. No clinically relevant improvement was seen in quality of life, Waist-Hip Ratio (WHR), Body Mass Index (BMI) and blood pressure. The consecutive INAYA2 trial was able to show a decrease in sodium intake. Despite the INAYA trials, there is still a lack of nutritional interventions for young cancer survivors. The Survivor Health and Resilience Education (SHARE) Program focused on bone health behaviours among adolescent survivors of childhood cancer (median age 14.2 years). This intervention had a significant short-term impact at one-month follow-up. Compared with the control group, participants of the intervention group had higher milk consumption, calcium supplementation and dietary calcium intake . Another double-blind randomized controlled trial (median age 17 years) focusing on the bone health of long-term survivors of childhood ALL used calcium and cholecalciferol supplementation (or a placebo). This trial came to the conclusion that cholecalciferol and calcium supplementation provided no additional benefit to nutritional counselling for improving lumbar spine bone mineral density among adolescent and young adult survivors of ALL . Regarding the physical activity of CAYAs, only a few randomized controlled trials with very small sample sizes exist so far. These studies can be classified into three main categories: home-based, web-based or supervised physical activity interventions. 
A home-based intervention with asymptomatic childhood acute lymphoblastic leukaemia survivors included a three-month exercise programme and reported improved cardiac function, in terms of a significant improvement in the attenuated left ventricular diastolic function . Another home-based intervention, in which participants met physical activity guidelines and wore a motivational activity tracker over a six-month period, led to an increase, though not statistically significant, in moderate to vigorous physical activity and maximum oxygen uptake (VO 2 max ) . A similar intervention focusing on a ten-week home-based exercise programme with feedback from a pedometer, and supported by a counsellor, led to a significant decrease in fatigue and a significant increase in daily physical activity (steps per day) . Online interventions focused on promoting health behaviour via email over a six-week period or using a physical activity website for 12 weeks . Although these studies demonstrated high feasibility and acceptability, physical activity levels did not change or increase significantly. A Facebook-based physical activity intervention over a three-month period increased moderate to vigorous physical activity and led to significant weight loss . Supervised interventions containing a physical activity-educational and/or exercise intervention in a group setting improved physical activity, quality of life and also cardiovascular, physical and metabolic outcomes of cardiovascular diseases . In our clinic we conducted the MAYA trial (Motivate AYA, presentation DGHO 2018, publication in progress), in which we evaluated, in a randomized design, the effect of a structured intervention on physical activity and quality of life in CAYAs with cardiovascular risk factors. CAYAs in the intervention group increased the amount of vigorous-intensity activity from baseline to week 12 and reduced the amount of time spent sitting. Psycho-oncological interventions Several behavioural intervention techniques are used to address mental distress in cancer survivors, including the Transtheoretical Model (TTM), cognitive behavioural therapy (CBT) and motivational interviewing (MI). Current literature is still inconclusive as to which of these shows the best effect . MI seems to be a promising approach as it targets patients who feel ambivalent about a certain behaviour, knowing on the one hand about the disadvantages, and on the other hand seeing the benefits of said behaviour. It is therefore compatible with a variety of problems that CAYAs feel ambivalent about, such as classic health behaviours like smoking cessation, alcohol consumption, physical activity and nutrition. Although initially developed to address addiction, MI is nowadays widely used across the medical field to address a broader range of behaviours . MI uses reflective listening and a client-centred approach to help the patient explore their own motivation to change and their way of planning and realizing said changes. Further techniques used in MI are the expression of empathy, the development of discrepancies between the actual behaviour and the patients’ goals, the avoidance of confrontation within the therapeutic relationship and the enhancement of optimism and self-efficacy . Therefore, CAYA-specific topics, like changing their way of coping with cancer, dealing with fear of recurrence or coping with fatigue symptoms, may also be addressed using MI techniques, despite the fact that scientific evidence in this regard is sparse. 
The existing evidence regarding MI in cancer survivors seems promising: Spencer et al. included 15 studies using MI in cancer survivors in their systematic review. They concluded that MI techniques seem to be effective; besides impacting health behaviours like nutrition and activity, MI may decrease patient stress related to cancer and may enhance overall quality of life . Regarding fatigue and pain, the evidence remains inconclusive. Survivorship programmes for CAYAs Follow-up care of CAYAs is challenging in itself, as it encompasses more than the detection of cancer relapse. Such programmes are necessary but so far rarely available; 67% of CAYAs have no access to specialised CAYA aftercare . In the United States of America, patients are treated in survivorship clinics after cancer, but sadly there is no such centralized institution in Germany or Europe. Examples of prevention or support programmes for CAYAs in Germany include: OncoKids ( www.neu.onko-kids.de ), the Phönikks foundation ( www.phoenikks.de ), the Pancare network ( www.pancare.eu/en ), AYA parents ( www.khae.ovgu.de/SAYA.print ), JET trial ( www.uniklinikum-jena.de ), AYALE trial ( www.uniklinikum-leipzig.de ) and “Deutsche Stiftung für junge Erwachsene mit Krebs” ( www.junge-erwachsene-mit-krebs.de ). Programmes for young cancer survivors with a focus on lifestyle and health behaviour, particularly with regard to a healthy diet and regular physical activity, are lacking. Also lacking are firm conclusions and data about treating or preventing long-term effects; the available data are heterogeneous and generally incomparable. There is a lack of randomized controlled trials dealing with the topic of our paper. A standardized follow-up care programme for CAYAs in Germany, especially one with a focus on the long-term consequences of cancer survivorship, does not exist. Based on the results of the aforementioned interventional trials, there is a dire need to establish a regular and comprehensive assessment, and related interventions, covering preventative lifestyle and psychological issues. This paper presents the first structured and randomized follow-up programme focusing on lifestyle and psychological consequences and appropriate interventions in CAYAs. 
Programmes for young cancer survivors with a focus on lifestyle and health behaviour, particularly with regard to a healthy diet and regular physical activity, are lacking. Data on treating or preventing long-term effects are likewise lacking, and the available data are heterogeneous and generally not comparable. Randomized controlled trials addressing the topic of our paper are also lacking. A standardized follow-up care programme for CAYAs, especially one with a focus on the long-term consequences of cancer survivorship, does not exist in Germany. Based on the results of the aforementioned interventional trials, there is a dire need to establish a regular and comprehensive assessment, and related interventions, covering preventative lifestyle and psychological issues. This paper presents the first structured and randomized follow-up programme focusing on lifestyle and psychological consequences and appropriate interventions in CAYAs. Based on the physical, psychological and social long-term sequelae of CAYAs, the current literature and our experience in our survivorship clinic, we designed the CARE for CAYA-Program (CFC-P). This programme was designed to be an adjunct to medical follow-up care, with the aim of assessing the needs of CAYA survivors and applying need-based interventions to prevent potential long-term sequelae. Thus, the CFC-P includes annual comprehensive assessments to determine the individual need for a single, or several, preventative intervention(s) (high need) or no need for a preventative intervention (low need), followed by need-stratified modular interventions, currently including physical activity, nutrition, and psycho-oncology (Fig. ). The CFC-P was developed and is currently conducted in a consortium of 15 sites in Germany with established follow-up care clinics for CAYAs. The programme is running and is implemented in addition to survivorship clinics run by medical doctors, who focus on medical issues regarding either cancer recurrence or medical long-term effects. Within these established structures, no nutritional, physical activity or psycho-oncological support is integrated or reimbursed on a general basis yet, particularly not with preventive intention (i.e. not treating a pre-existing disorder). The CFC-P will be conducted within the framework of the innovation fund of the German Federal Joint Committee and thus aims to establish the efficacy of the programme with a randomized trial, followed by implementation into general care, including the potential reimbursement of the interventions. Therefore, the programme will continue after completion of the randomization phase, and further evaluations regarding the assessment and the interventions will be conducted. Within the innovation fund, projects are limited to an overall duration of three years; thus, the assessment of short-term effects was chosen to determine the efficacy of the programme. Need-stratified assessment The CFC-P includes a need-stratified assessment for the three modules: physical activity, nutrition, and psycho-oncology. The screening for need within the physical activity module is based on a questionnaire that was specially developed for the programme, because no adequate pre-existing questionnaire suitable for screening in this population was found. It includes questions regarding an average week within the past month: 1. On how many days in an average week have you been physically active at a moderate intensity?
How long have you been physically active on these days? And 2. On how many days in an average week have you been physically active at a vigorous intensity? How long have you been physically active on these days? CAYAs who are less active than 150 min of moderate or 75 min of vigorous intensity (or a combination of both intensities), or who report activity on fewer than three days a week, are classified as having a need for an intervention. Within the nutrition module, the CAYAs fill in a three-day dietary record (“Freiburger Ernährungsprotokoll”) providing data to calculate the “Healthy Eating Index – European Prospective Investigation into Cancer and Nutrition” (HEI-EPIC). The HEI-EPIC is an established instrument to evaluate dietary behaviour. In this study, the validated German version of the HEI, the HEI-EPIC, was used. This instrument has been used within the INAYA trial and was considered appropriate for this population. The HEI-EPIC distinguishes the following eight food groups: drinks, vegetables, fruits, cereals/potatoes, milk/dairy products, meat/sausages/fish/eggs, fats/oil and sweets/snacks. Based on the calculation described by Rüsten et al., 0–10 points are assigned for each food group, with up to 20 points for fruits, vegetables and drinks. The sum score ranges from 0 to 110 points. A sum score of ≤ 40 points indicates poor, > 40–64 points moderate and ≥ 65 points good dietary behaviour. CAYAs with an HEI-EPIC score of ≤ 40 are in need of a nutrition intervention. For the physical activity and nutrition modules, further criteria for need are defined, for example meeting the criteria for metabolic syndrome (Table ). For the psycho-oncology module, the needs assessment consists of the German version of the NCCN Distress Thermometer. It consists of a general scale scored from 0 to 10, as well as an additional problem list. As a score of five is internationally recognized as an indicator that a patient is distressed and needs support, this is also used as the cut-off for the psycho-oncology module. For a score of five in the Distress Thermometer, Mehnert et al. found a sensitivity of up to 84% and a lower specificity of up to 47% when screening for moderate levels of anxiety and/or depression with the Hospital Anxiety and Depression Scale (HADS-D). The second screening instrument for this module is the German version of the Patient Health Questionnaire (PHQ-4). A Cronbach's α of 0.82 indicated good internal consistency, and the construct validity of the PHQ-4 was supported by intercorrelations with other self-reported scales. Modular interventions The three modules will be conducted by therapeutic personnel (e.g. sport scientists, physiotherapists, dieticians or nutrition scientists, psycho-oncologists) and follow a stringent interview guide. For every module, a comprehensive manual was formulated and applied at each CFC site. In addition, the personnel at every site were trained at the beginning of the programme and participated in regular telephone conferences. The physical activity module includes five consultation hours within six months. The intention of the consultation is to motivate the CAYAs to increase their physical activity, especially that of vigorous intensity. Based on the TTM, individual objectives will be determined and possible barriers to being active will be identified. In addition to the five consultations, the participants receive newsletters with general information about physical activity, as well as individual newsletters.
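The screening cut-offs described above reduce to a few numeric rules. The following sketch is an illustration only, not software used in the CFC-P: the function names are made up, the doubling of vigorous minutes (so that 75 vigorous minutes count like 150 moderate minutes) is an assumption on our part, and additional criteria such as metabolic syndrome (Table ) or the PHQ-4 are omitted.

```python
# Illustrative sketch only -- not the CFC-P software.

def needs_physical_activity_intervention(moderate_min_per_week: float,
                                         vigorous_min_per_week: float,
                                         active_days_per_week: int) -> bool:
    """Need if below 150 min moderate / 75 min vigorous (or a combination),
    or if active on fewer than three days per week."""
    combined = moderate_min_per_week + 2 * vigorous_min_per_week  # assumed 2x weighting
    return combined < 150 or active_days_per_week < 3


def needs_nutrition_intervention(hei_epic_score: float) -> bool:
    """HEI-EPIC sum score ranges from 0 to 110; <= 40 indicates poor dietary behaviour."""
    return hei_epic_score <= 40


def needs_psychooncology_intervention(distress_score: int) -> bool:
    """NCCN Distress Thermometer 0-10; a score of 5 or more is the cut-off."""
    return distress_score >= 5


# Example: 90 min moderate + 20 min vigorous activity on 2 days/week,
# HEI-EPIC score 38, distress score 6 -> high need in all three modules.
high_need = any([
    needs_physical_activity_intervention(90, 20, 2),
    needs_nutrition_intervention(38),
    needs_psychooncology_intervention(6),
])
print(high_need)  # True
```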
The nutritional counselling includes five consultation hours within six months. The consultations are based on the standardized German nutrition care process, including nutrition assessment, nutrition diagnosis, nutrition intervention and nutrition monitoring and evaluation. The nutritionist gives individual advice on a healthy diet to prevent relapse and helps the CAYAs to identify the barriers that prevent them from eating healthily and to overcome these. In addition to the five consultations, the CAYAs receive general and individual newsletters and are invited to a shopping training and a cooking class in order to support a healthy diet. The psycho-oncology module includes five sessions of MI, on an approximately biweekly schedule. MI is a patient-centred, guided approach to therapeutic communication with the goal of enhancing a person's self-motivation in order to reach their goals by changing their behaviour. In the initial session, the patient and therapist will select a focus for the subsequent sessions. The sessions take 50 min and will be run by a certified psycho-oncologist trained in MI. Regular telephone supervision will be provided by a senior psycho-oncologist and a certified MI trainer. Hypotheses There are two primary hypotheses of the CFC-P, one focused on evaluating the interventions themselves and one focused on evaluating the assessment process. In this respect, it is expected that the adaptive interventions of the CFC-P will improve the lifestyle (nutrition and/or physical activity) and/or the psychological situation of the participants. Additionally, the evaluation and the adaptation of the annual assessment schedule will improve the coverage of unmet needs of CAYAs. Secondarily, the CFC-P should prove to be a feasible and cost-effective programme, as it utilises an adequate and effective needs-adapted participant allocation scheme. This, in combination with the effective interventions, will improve the cardiovascular risk profile and quality of life of CAYAs. Endpoints Primary endpoint of the CFC-P: rate of CAYAs with need for intervention after 12 months (rate in %, defined as CAYAs with need for intervention divided by all CAYAs included in the trial), compared between the intervention and control groups in the randomized study part. Co-primary endpoint of the CFC-P: rate of CAYAs with unmet needs that are outside of the scope of the assessment (comparison of initial assessment and adapted assessment). Secondary endpoints of the CFC-P: feasibility (recruitment, completion of assessments, adherence to and dropout rates of the overall programme and the respective interventions); cost effectiveness (secondary health care costs, health care utilization); allocation and efficacy of modular interventions (difference in the individual need, cardiovascular risk factors and quality of life or fatigue at 12 months, in relation to the initial assessment and the participation in an interventional module). Additionally, the intervention modules will be assessed separately by applying specific endpoints for each module to assess the efficacy of the respective intervention after 12 months. To detect potential short-term effects, which may attenuate over time, an additional assessment will be performed after four months. These endpoints include changes in the respective questionnaires or in the objective parameters (e.g. BMI, phase angle in bioelectrical impedance analysis or spiroergometry).
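To make the primary endpoint concrete: it is simply the proportion of randomized CAYAs still classified as in need at 12 months, compared between arms. The statistical methods below specify a likelihood-ratio Chi² test for this comparison and reporting of absolute and relative risk changes; the sketch below uses invented counts and is not the trial's analysis code.

```python
# Illustrative only: hypothetical 12-month counts, not trial data.
import numpy as np
from scipy.stats import chi2_contingency

# rows: intervention arm, control arm
# columns: still in need at 12 months, no longer in need
table = np.array([[40, 80],
                  [75, 45]])

rates = table[:, 0] / table.sum(axis=1)             # primary endpoint per arm
g, p, dof, expected = chi2_contingency(table, correction=False,
                                       lambda_="log-likelihood")  # likelihood-ratio (G) test
abs_change = rates[0] - rates[1]                     # absolute risk change
rel_change = rates[0] / rates[1]                     # relative risk

print(f"rate in need: intervention {rates[0]:.0%}, control {rates[1]:.0%}")
print(f"G^2 = {g:.2f}, df = {dof}, p = {p:.4f}")
print(f"absolute change = {abs_change:+.0%}, relative risk = {rel_change:.2f}")
```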
Inclusion criteria Patients between the ages of 15 and 39 years who received treatment for their cancer as a CAYA and are tumour free and in follow-up care will be included. Course of the programme (flow chart, Fig. ) At baseline (and subsequently on an annual basis) the current medical and psychosocial situation and the lifestyle of all included CAYAs will be assessed. The assessment will be completed using validated questionnaires (e.g. EORTC QLQ-C30, NCCN distress, PHQ-4, BSA, HEI-EPIC) and objective parameters (e.g. BMI, WHR, hyperlipidaemia, hypertension, diabetes). All participants will receive a psychological and lifestyle consultation immediately after the assessment as basic care. Based on their individual needs, CAYAs with low needs will be reassessed after one year, whereas those with high needs will be allocated to a single, or several, preventative intervention(s) (module) as needed (Table ). The assessment will be repeated annually, and further preventative interventions may be applied. In the initial randomized phase, CAYAs with high needs will be randomized between preventative modular interventions (nutrition, physical activity and/or psycho-oncology) over a 12-month period, or basic care (waiting list, with the option to participate in the second year). Every 12 months, all CAYAs will receive a tablet-based screening form including the following validated and objective questionnaires:
NCCN Distress Thermometer (DT)
EORTC QLQ-C30
3-day dietary record (Freiburger Ernährungsprotokoll)
Modified Physical Activity, Exercise and Sport Questionnaire (BSA) supplemented with the Borg Scale
Patient Health Questionnaire (PHQ-4)
Questions about unmet needs
Modified questionnaire about satisfaction (ZUF-8)
Measure of health status (EQ-5D-5L)
Questionnaire about school and work
Questionnaire about loss of working hours
Short questionnaire about the use of medical services
SCNS-TF-9
Depending on the answers, all patients will be classified into two groups. Group one comprises patients with a high need for intervention in at least one module, and group two comprises patients without need for intervention. Criteria for a high need for intervention are set separately for each module and are listed in Table . Randomization and blinding The annual comprehensive assessment will be performed after study inclusion by the responsible study personnel at each site for the three modules (physical activity, nutrition and psycho-oncology). The wearable activity monitoring over one week (ActiGraph) will be evaluated electronically, and the bioimpedance analysis (BIA) will be performed with standardized criteria to avoid any bias. When a high need in one of the modules is detected, a facsimile request for randomization will be sent to the consortium leader. The 1:1 randomization is performed by authorized study personnel of the University Medical Center Hamburg-Eppendorf for each site using a blinded, computer-generated randomization list allocating either the intervention or the control group. The result of the randomization will be documented and sent back to the site via facsimile. To achieve a rapid response and ensure smooth communication, a dedicated telephone line is set up for the randomization procedure within the CFC-P. Ethics All local ethics committees in the consortium approved the study protocol. The leading ethics committee is the Hamburg Medical Chamber.
Local ethics committees are the „Ethikkommission an der Medizinischen Fakultät der Rheinischen Friedrich-Wilhelms-Universität Bonn “belonging to University Hospital Bonn, „Ethikkommission der Friedrich-Alexander-Universität Erlangen-Nürnberg “belonging to University Hospital Erlangen, „Ethik-Kommission der Medizinischen Fakultät der Universität Duisburg-Essen" belonging to University Hospital Essen, „Ethik-Kommission der Albert-Ludwigs-Universität Freiburg “belonging to Medical Center University of Freiburg, „Ethik-Kommission der Medizinischen Hochschule Hannover “belonging to Hannover Medical School, „Ethik-Kommission der Friedrich-Schiller-Universität Jena “belonging to University Hospital Jena, „Ethikkommission der Universität zu Lübeck” belonging to University Hospital of Schleswig-Holstein, Campus Lübeck, “Ethik-Kommission der Otto-von-Guericke-Universität an der Medizinischen Fakultät und am Universitätsklinikum Magdeburg A. ö. R. “belonging to Medical Faculty University Hospital Magdeburg, „Ethikkommission der Landesärztekammer Rheinland-Pfalz K.d.ö.R. “belonging to Mainz University Medical Center, „Ethik-Kommission der Ärztekammer Westfalen-Lippe und der Westfälischen Wilhelms-Universität Münster “belonging to University Children’s Hospital Münster, „Ethikkommission an der Medizinischen Fakultät der Universität Rostock “belonging to University Hospital Rostock, „Ethik-Kommission bei der Landesärztekammer Baden-Württemberg “belonging to Olgahospital Stuttgart and „Ethik-Kommission bei der Medizinischen Fakultät der Universität Würzburg, Institut für Pharmakologie “belonging to University Hospital Würzburg. The study is conducted in accordance with the Declaration of Helsinki, Good Clinical Practice guidelines, including data and patient’s privacy protection. All participants provide written informed consent. The CFC-P was registered on 19th January 2018 prospectively and received the ID DRKS00012504. Recruitment has started in January 2018. Statistical methods All analyses will be performed in accordance with the intention-to-treat principle. The first primary endpoint “Rate of CAYAs with need for intervention after 12 months” will be compared using a likelihood-ratio Chi 2 test. The co-primary endpoint “Rate of CAYAs with needs not yet covered in the assessment” will only be tested if the null-hypothesis for the first primary endpoint is rejected (hierarchical testing). The closed testing procedure of Lehmacher et al. will be applied . Effects will be reported as absolute and relative risk changes with 95% confidence intervals. Sample size calculation Primary endpoint (rate of CAYAs with need for intervention after 12 months). Within the group with high needs it is expected that basic care will reduce the need for interventions by 10%. The need-stratified interventions of the CFC-P should reduce the need for interventions by additional 15 to 75% after 12 months. Using the likelihood ratio Chi 2 test and taking into account an alpha value of 5% and a beta error of 10%, 242 CAYAs must complete the 12-month evaluations. Considering a drop-out rate of approximately 30%, a total of 350 CAYAs with initial high needs will be 1:1 randomized to basic care or need-based interventions. It is expected that about 60% of CAYAs will have needs that require intervention, thus 530 CAYAs have to be recruited for the randomized phase. The programme will continue afterwards, and it is planned to include overall 1500 participants in this three-year time frame. 
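The sample-size reasoning above can be approximated with a standard two-proportion power calculation. The sketch below assumes that need persists in about 90% of the basic-care arm versus about 75% of the intervention arm (the 10% versus additional 15% reduction mentioned above); because the protocol's exact effect-size assumptions and its likelihood-ratio formulation may differ, the result lands near, but not exactly on, the reported figures of 242 completers, 350 randomized and 530 recruited.

```python
# Rough re-derivation under assumed proportions; not the protocol's original calculation.
import math
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

alpha, power = 0.05, 0.90           # 5% alpha, 10% beta error
p_basic_care = 0.90                 # assumed: basic care reduces need by ~10%
p_intervention = 0.75               # assumed: interventions reduce need by a further ~15%

effect = proportion_effectsize(p_basic_care, p_intervention)   # Cohen's h
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=alpha,
                                         power=power, alternative="two-sided")

completers = 2 * math.ceil(n_per_arm)              # CAYAs completing the 12-month evaluation
randomized = math.ceil(completers / (1 - 0.30))    # allow for ~30% drop-out
recruited = math.ceil(randomized / 0.60)           # ~60% expected to show high needs

print(completers, randomized, recruited)           # roughly 258, 369, 615 under these assumptions
```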
Consent Patients deemed eligible for entry into the study will be provided with a verbal and written explanation of the study. After adequate time has been given, all queries have been addressed and the clinical team is confident that the patient understands the study, patients will be asked to consent to the study. The written declaration of consent of minor participants (under 18) needs to be signed by a parent or guardian. Data collection and confidentiality Confidentiality (with regard to the Federal Data Protection Act) of all patient-related data is ensured, as all data will be stored and evaluated pseudonymously (encrypted). A separate log relating the original patient data to the respective encrypted data will be created and appropriately secured by password, and only authorized study personnel will be granted access to this file. Each investigator must ensure that the patients' confidentiality is maintained. Information and measurements of the study participants collected during the study will be recorded and stored separately from the personal information. Immediately after data collection, the data will be stored pseudonymously via the patient ID. All collected data will remain in secured locations and on secured servers. The written and documented personal data, as well as the illness or health information, will be sealed and stored separately from each other. Access to data The responsible investigators commit to archiving all documents of the study for 15 years after the completion of the study.
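As a purely illustrative sketch of the pseudonymisation principle described above (a separate, access-restricted identity log; study data keyed only by a pseudonymous patient ID), the data flow could look as follows; none of these names or structures are taken from the CFC-P's actual data-management system.

```python
# Purely illustrative sketch of the pseudonymisation principle: identifying data and
# study measurements live in separate stores, linked only by a random patient ID.
import secrets

identity_log = {}    # kept separately and password-protected in the protocol
study_records = {}   # analysis data, keyed by pseudonym only

def enrol(name: str, date_of_birth: str) -> str:
    pseudonym = secrets.token_hex(8)  # random study ID
    identity_log[pseudonym] = {"name": name, "dob": date_of_birth}
    study_records[pseudonym] = {"assessments": []}
    return pseudonym

pid = enrol("Jane Doe", "1999-05-17")
study_records[pid]["assessments"].append({"visit": "baseline", "distress": 6})
```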
Multimodal cancer treatment, including surgery, radiotherapy, chemotherapy, immunotherapy, allogeneic HSCT and/or endocrine or targeted therapy, can result in relevant long-term sequelae. CAYAs face significant, partly severe and at times life-threatening late effects that can affect different organ systems (e.g. the endocrine system, heart, bones, and the cognitive and neurological system) and can cause secondary malignancies. Furthermore, CAYAs have a high rate of unmet psychosocial needs that are currently neither regularly assessed nor cared for. Although CAYAs have faced a severe, life-threatening disease in the early years of their lives, one third of survivors have risky health behaviour and an unhealthy lifestyle. CAYAs have moderate to poor nutritional behaviour and are insufficiently active compared with controls. Therefore, improving the lifestyle behaviour of CAYAs is important, in particular to reduce the risk of cardiovascular long-term toxicities. Individualized exercise and nutrition interventions to promote physical activity and a healthy diet are needed after cancer treatment in order to enhance the lifestyle of CAYAs. So far, only a few randomized intervention trials have examined the physical activity or nutritional behaviour of CAYAs. Supervised interventions containing a physical activity-educational and/or exercise intervention in a group setting improved physical activity, quality of life and cardiovascular, physical and metabolic outcomes related to cardiovascular disease. Interventions focusing on physical activity or a healthy diet of young cancer survivors are practical, feasible and generally well accepted by the participants. This randomized controlled multicentre trial will use a complex approach with a focus on three intervention modules: physical activity, nutrition and psycho-oncology. All interventions are supported by diverse tools, such as individual counselling, wearable activity monitoring, bioimpedance analysis, training and cooking classes, regular newsletters about a healthy lifestyle and, optionally, an anamnesis of smell and taste and spiroergometry. The counselling about physical activity and/or nutrition will focus on overcoming CAYAs' barriers to healthy behaviour. The CFC-P is the first randomized trial with young cancer survivors to apply motivational interviewing (one-to-one sessions) within the psycho-oncology module. The results of this study will show whether the targeted interventions can reduce the rate of CAYAs with unmet needs at 12 months, as well as the feasibility of a comprehensive lifestyle survivorship programme and the efficacy of the modular interventions, e.g. with regard to the individual need, cardiovascular risk factors and quality of life or fatigue at 12 months in relation to the initial assessment. In conclusion, comprehensive cancer care has to include more than medical tumour follow-up, particularly in CAYAs. Clinicians should be aware of this vulnerable group of patients for better detection, prevention, and management of treatment-induced late effects.
Follow-up care should be undertaken by a team of specialists from different disciplines, including paediatric and medical oncologists, psycho-oncologists, endocrinologists, cardiologists, social workers, nutrition specialists, sport scientists and others. Besides the treatment of any side effects, the regular assessment and detection of early signs of potential problems or disorders, together with related preventative interventions, should be among the main priorities of follow-up care. Thus, the CFC-P was designed to establish a follow-up care programme for CAYAs at 15 large sites in Germany, to be implemented at further sites upon demonstration of the efficacy of the programme. Results of the CFC-P are expected by the end of 2020. During the final phase of the programme, the results will be evaluated and discussed with health insurance funds to ensure continuation of the programme within the standard of care. The two major health insurance funds in Germany (AOK Rheinland/Hamburg and TK) are partners of the programme, and all interventions were developed with future standard-of-care accounting in mind.
BRCA1‐associated‐protein‐1 inactivated melanocytic tumours: characterisation of the clinicopathological spectrum and immunohistochemical expression pattern of preferentially expressed antigen in melanoma
BAP1-inactivated melanocytic tumours (BIMT), also referred to as BAP1-inactivated melanocytomas, present clinically as skin-coloured, dome-shaped papules typically affecting the trunk, head and neck area and extremities. Histologically, they are dermally based and show an epithelioid cell morphology with varying degrees of cytological atypia. BAP1 stands for BRCA1-associated protein 1 and is encoded on chromosome 3 (locus 3q21.1). It functions as a tumour suppressor gene that is implicated in DNA damage response, transcriptional regulation and chromatin modulation. Germline mutations in BAP1 result in an autosomal dominant tumour predisposition syndrome that is associated with a high risk of developing various tumours, including BIMT, cutaneous melanoma, uveal melanoma, mesothelioma, renal cell carcinoma, lung adenocarcinoma and meningioma. BIMT are often the first symptom of the tumour predisposition syndrome, and histopathological recognition is therefore important to guide patient management. The presence of varying degrees of cytological atypia in BIMT poses a diagnostic challenge to distinguish the entity from atypical Spitz tumours and melanoma. PRAME (preferentially expressed antigen in melanoma) is a cancer-testis antigen that has demonstrated utility in differentiating melanoma from benign counterparts, given its high specificity of diffuse expression in melanoma and absent to low expression in benign melanocytic naevi. The PRAME expression profile has not been comprehensively investigated in BIMT. The aims of this study were to (1) describe the clinical and histopathological spectrum of BIMT in a large patient cohort in Southern Alberta, Canada, (2) study the behaviour of BIMT by providing long-term follow-up and (3) study the expression pattern of PRAME in BIMT. Ethical approval was obtained from the health research ethics board of Alberta (HREBA.CC-19-0379). Haematoxylin and eosin-stained sections of 65 BAP1-inactivated melanocytic tumours were retrieved from the departmental files of the Alberta Precision Laboratories, Calgary, Alberta, Canada. BAP1 inactivation was defined as complete loss of nuclear staining of the immunohistochemical marker BAP1. The histological features were reviewed and the following histopathological criteria were documented: tumour circumscription, junctional and dermal component, pigmentation, stromal fibrosis, cytological atypia (mild, moderate, severe), mitotic activity, necrosis, infiltrative growth, inflammatory infiltrate and existence of a conventional melanocytic naevus in the background. Clinical data, including genetic test results for BAP1 germline mutations and follow-up, were obtained from patient records. All patient records were reviewed for additional cancer diagnoses and additional skin excisions. Immunohistochemistry with adequate controls was performed for S-100, Sox10, MelanA, HMB45, p16, BAP1 and Ki-67, according to the manufacturer's instructions (Table ). In a subset of cases, immunohistochemical staining had been performed for routine work-up, and in these cases the slides were reviewed without repeating the stains.
Immunohistochemistry for PRAME was performed on 4-μm-thick formalin-fixed paraffin-embedded whole tissue sections following pressure cooker antigen retrieval (Target Retrieval Solution; pH 6.1 citrate buffer; Dako, Carpinteria, CA, USA) using a rabbit anti-PRAME monoclonal antibody (1:100 dilution; clone EPR20330; Biocare Medical, Pacheco, CA, USA); the Novolink polymer detection system (Leica, Buffalo Grove, IL, USA) was used. Nuclear staining was assessed and scored as the percentage of overall tumour cells (0%: 0; 1–25%: 1; 26–50%: 2; 51–75%: 3; 76–100%: 4) and as staining intensity (scored from 0 to 3 as follows: absent: 0; weak: 1; moderate: 2; strong: 3). A combined score was calculated as the sum of the quantity and staining intensity scores. The normal peritumoural tissue served as positive control. The source of the antibodies and their dilutions are listed in Table . Clinical features Our histopathological archives for BAP1-inactivated tumours were searched from 2010 to 2022. The Calgary health zone includes an average of 1 500 000 people. Sixty-five BIMT involving 31 female and seven male patients (ratio f:m = 4.4:1) were included in the study. With 38 patients affected by BIMT, the estimated prevalence of BIMT in the Calgary health zone is 0.000025. With seven patients carrying a BAP1 germline mutation, the prevalence of BAP1 germline mutations presenting with BIMT is approximately 0.0000047 in this cohort. The patient age ranged from 16 to 77 years with a mean of 39.6 years; two patients were younger than 18 years. All tumours were completely excised by primary excision or re-excision. Seven patients (18.4%) had a BAP1 germline mutation. These patients presented at a younger age (range = 16–66, median = 25 years) without sex predilection (four females, three males). The results of the genetic analyses with documentation of the specific mutations were available for six patients and included the following: c.376-2A>G, c.376-2_392del, c.1717delCp.(Leu573TrpfsTer3), c.458_549delCT, c.1358_1359del and c.485_495delCT. The majority of BIMT were located on the trunk ( n = 26, 43%; including 15 BIMT on the back, three on the shoulder, four on the chest and four on the abdomen) and in the head and neck area ( n = 26, 43%; 11 BIMT on the face, seven on the ears, five on the neck and three on the scalp). The remaining tumours were located on the extremities ( n = 13; seven on the upper extremities, six on the lower extremities including acral sites). The tumours presented with a median size of 0.55 cm (range = 0.2–1.5 cm, available for n = 22). Patients with BAP1 germline mutations frequently presented with multiple BIMT (range = 1–8 BIMT per patient, mean = 6). The anatomical distribution did not differ significantly between germline-associated and sporadic tumours, nor did the size. No recurrences or metastases of BIMT were noted in the entire cohort (follow-up period = 4–111 months, mean = 44 months). One male patient with a BAP1 germline mutation died of complications of mesothelioma at the age of 69 years, 42 months after the diagnosis of one BIMT. This patient's history is also remarkable for two basal cell carcinomas and an invasive melanoma that the patient developed 3 years prior to his mesothelioma. The melanoma (size 1.5 × 1.4 cm) was amelanotic, showed spindle cell morphology and a maximum tumour thickness of 2.1 cm.
None of the remaining patients with BAP1 germline mutations developed a malignant tumour during the follow-up period (follow-up period range = 8–111 months, mean = 49 months). According to the medical records, 26 patients also had a history of conventional melanocytic naevi unrelated to their BIMT(s). The number of conventional melanocytic naevi ranged from one to 28 per patient. Within the group of patients with sporadic BIMT, the mean number of additional conventional melanocytic naevi was four per patient. Within the group of patients with BAP1 germline mutations, the mean number of additional conventional naevi was 5.5. Within the group of patients with sporadic BIMT (patient n = 31), one patient had a remote history of melanoma (no histopathological data available), two had a history of basal cell carcinoma, one patient had diffuse large B cell lymphoma, one patient had a history of Hodgkin lymphoma plus a history of ductal carcinoma in-situ of the breast, one patient developed invasive ductal carcinoma of the breast and one patient had a remote history of prostatic adenocarcinoma plus a remote history of melanoma. Two patients with sporadic BIMT harboured BRCA germline mutations (one BRCA1 and one BRCA2 mutation). The clinical characteristics of all BIMT are summarised in Table . Histological features All BIMT were well-circumscribed, nodular tumours located within the superficial and mid-dermis (Figure ). Twenty-six tumours (40%) showed a minor junctional component consisting of small melanocytic nests composed of epithelioid melanocytes and a few single epithelioid cells (Figure ). The tumours were composed of nests and sheets of non-pigmented (37 tumours, 57%) or lightly pigmented (28 tumours, 43%) epithelioid cells with amphophilic cytoplasm and round to ovoid nuclei with evenly dispersed chromatin and prominent nucleoli (Figure ). Moderate to severe cytological atypia, including irregular nuclear contours, nuclear pseudoinclusions, bizarrely formed nuclei, multinucleation and hyperchromasia, was present in 41 tumours (63%) (Figure ). No dermal mitoses were observed in the majority of BIMT (80.2%). Low mitotic activity was observed in seven tumours (10.8%), ranging from one to two mitoses per mm², but no atypical mitotic figures were identified. No foci of tumour necrosis were observed in any tumour. A brisk lymphocytic inflammatory infiltrate was present in seven tumours (10.8%), a mild to moderate inflammatory infiltrate in 35 cases (53.8%) and no inflammation was seen in 23 BIMT (35.4%). Only one germline-associated tumour showed significant stromal fibrosis; the remainder of cases had no remarkable fibrosis. One germline-associated BIMT showed angiomatoid features with multiple small, dilated vessels intermingling with the epithelioid melanocytes. A conventional background naevus flanking the BAP1-inactivated proliferation on one or both sides was present in the majority of the tumours ( n = 50, 76.9%). The background naevus made up the lesser part of the tumour in all cases; the dominant component was the dermal BAP1-inactivated proliferation. The background conventional naevus was composed of dermal melanocytic nests only in 33 cases, but also revealed a compound architecture in 17 cases. Two cases showed intermingling of the banal, smaller naevus cells with the large, epithelioid BAP1-inactivated tumour cells (Figure ), whereas most tumours had a clear demarcation between the conventional naevus and the BAP1-inactivated cells.
The diagnosis of the cutaneous invasive melanoma arising in one patient with a BAP1 germline mutation was straightforward histopathologically. The tumour was composed of atypical spindle cells arranged in fascicles and sheets within the dermis, demonstrating an infiltrative growth pattern and a high mitotic rate (20 per mm²). Neither an adjacent conventional naevus nor a BIMT was present in the periphery of the melanoma. Immunohistochemistry All tumours were strongly and diffusely positive for S100 (nuclear and cytoplasmic), Sox10 (nuclear) and MelanA (cytoplasmic) (Figure ) and negative for HMB45, except for a junctional component in the conventional background naevi. Nuclear p16 staining (available for 37 cases) was either retained (19 cases, 51.4%) or mosaic (Figure ). PRAME showed focal or patchy, weak nuclear staining in all tumours (Figure ). The overall combined score was low, with a mean of 3 (range = 0–80; quantity range = 0–40% of tumour cells; intensity range = 0–2). Ki-67 staining revealed a low proliferative index (Figure ) in all BAP1-inactivated tumours. The melanoma arising in the patient with the BAP1 germline mutation showed loss of nuclear BAP1 staining but diffusely positive cytoplasmic staining, and PRAME showed focal nuclear expression.
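The combined PRAME score reported above follows the scheme stated in the methods (a quantity score derived from the percentage of positive tumour nuclei plus an intensity score). The short sketch below spells out that stated rule; the helper names are illustrative and it is not the scoring software used in the study.

```python
# Sketch of the stated PRAME scoring rule: quantity score (0-4) from the percentage
# of positive tumour nuclei, plus an intensity score (0-3).

def prame_quantity_score(percent_positive: float) -> int:
    if percent_positive == 0:
        return 0
    if percent_positive <= 25:
        return 1
    if percent_positive <= 50:
        return 2
    if percent_positive <= 75:
        return 3
    return 4

def prame_combined_score(percent_positive: float, intensity: int) -> int:
    """intensity: 0 = absent, 1 = weak, 2 = moderate, 3 = strong."""
    return prame_quantity_score(percent_positive) + intensity

# A BIMT with weak staining in ~10% of nuclei scores 1 + 1 = 2, far below the
# diffuse, strong expression typically reported in melanoma.
print(prame_combined_score(10, 1))  # 2
```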
Prior to their first description in 2011 by Wiesner et al., BIMT had been classified as epithelioid Spitz tumours or melanomas. Since 2011, more information and details concerning the histopathological features, the pathogenesis and the genetic background of BIMT have been gathered. BIMT show a bi-allelic inactivation of the BAP1 tumour suppressor gene located on chromosome 3q21, which can be caused by loss-of-function mutations or by deletion affecting the BAP1 locus. BIMT typically arise from a conventional naevus with BRAF p.V600E or NRAS mutations or RAF1 fusion. The double hit results in a clonal expansion of the BAP1-inactivated clone with the typical epithelioid phenotype. In the sporadic setting, the double hit is caused by loss-of-function mutations altering the BAP1 nucleotide sequence, often combined with a chromosomal deletion involving the wild-type BAP1 locus. In patients with BAP1 germline mutations, the second hit is the inactivation of the remaining wild-type BAP1 allele. Despite this pathogenetic insight, data regarding the prevalence and the behaviour of BIMT have been scarce. Approximately 200 families with BAP1 germline variants have been described to date. In our cohort, seven patients presenting with BIMT carried a BAP1 germline mutation, and we calculated the prevalence to be approximately 0.0000047 in the Calgary health zone. The overall prevalence of BIMT in our cohort was 0.000025, stressing that BIMT occur more commonly in the sporadic setting than in the syndromic setting. Our data confirm that BIMT are often the primary manifestation in patients carrying a BAP1 germline mutation, and that these patients present with multiple BIMT at a young age, commonly within the second decade of life, as reported previously.
The diagnosis of a single BIMT in a patient does not by itself warrant genetic testing for a BAP1 germline mutation unless multiple BIMT are seen in the same patient or there is clinical suspicion due to a positive family history or manifestation of other tumours, especially uveal melanoma, cutaneous melanoma, mesothelioma and renal cell carcinoma. Malignant transformation of BIMT has been reported in both sporadic and germline‐associated tumours, , but the majority of BIMT show an indolent behaviour. Our data stress the indolent behaviour of BIMT in both settings, sporadic and syndromic. None of the tumours showed recurrence or aggressive behaviour despite worrisome histopathological features. No malignant transformation was noted in any of the BIMT in this study, which argues against BIMT being a significant melanoma precursor lesion. The spindle cell melanoma arising in one of the patients with BAP1 germline mutation did not show any histopathological resemblance to BIMT, despite loss of nuclear BAP1 by immunohistochemistry, and no conventional naevus or BIMT was present in the background. The cytoplasmic expression of BAP1 in this melanoma is a finding of unknown significance. Cytoplasmic expression of BAP1 has been described in a subset of uveal melanomas, suggesting a functional role of BAP1 within the cytoplasm that warrants further investigation. Histopathologically, the classic morphology of BIMT is described as a biphasic growth pattern with a nodular or sheet‐like proliferation of epithelioid melanocytes in the background of an adjacent conventional naevus component. The larger epithelioid cells typically display abundant amphophilic cytoplasm with vesicular nuclei and prominent nucleoli, resembling the melanocytes seen in Spitz naevi. Other rare morphological features, including rhabdoid morphology, adipocyte metaplasia and nuclear pseudo‐inclusions, have also been described. , , In a larger histopathological study conducted by Garfield et al ., an extensive junctional component was significantly associated with BIMTs arising in patients with germline BAP1 mutations. In our study, we did not see any significant histopathological differences between BIMT arising in the sporadic versus the germline‐associated setting; in particular, a more prominent junctional component was occasionally present in both groups. According to our observations, germline‐associated BIMTs can present purely dermally or with junctional involvement in the same patient. Pagetoid spread was observed in a single BIMT, located on the upper back of a 16‐year‐old female with a BAP1 germline mutation. A prominent lymphohistiocytic infiltrate has frequently been described as a typical feature of BIMT. In our study, we saw a brisk lymphohistiocytic infiltrate in only 10.8% of tumours, a mild to moderate lymphohistiocytic infiltrate in 53.8% of cases and no inflammatory infiltrate in 35.4%. The review of all patient records in our cohort revealed that patients with BIMT also commonly develop multiple banal conventional naevi without significant atypia, and only one patient had a history of primary cutaneous melanoma. Two patients with sporadic BIMT also had breast carcinomas and harboured BRCA mutations, which is probably a coincidental finding. The same assumption applies to the other malignant diagnoses in patients with sporadic BIMT in this cohort. PRAME immunohistochemistry achieved low combined scores of quantity and intensity in all BIMT in this study. Previous data on PRAME expression in BIMT are limited. Lopez et al .
studied five BIMT, none of which had an immunoreactivity score greater than 1+ (staining of 1 to 25% of tumour cells), and all cases demonstrated a weak staining intensity. In another study, Turner et al . reviewed PRAME immunohistochemistry in a small number of BIMT. In their study, diffuse PRAME positivity (defined as at least weak nuclear positivity in > 75% of atypical cells) was present in two of five cases. The remaining three cases showed non‐diffuse or negative PRAME staining. Both studies used different PRAME antibodies and incubation protocols from those in our study. Taken together, the data suggest that diffuse PRAME positivity in BIMT is a rare phenomenon. BIMT are indolent tumours characterised by large dermal epithelioid melanocytes with nuclear loss of BAP1, and present with a variable amount of cytological atypia and low mitotic activity. A conventional background naevus is present in > 75% of cases and should not be misinterpreted as melanoma arising in a naevus. Malignant transformation of BIMT was not noted. Most BIMT occur sporadically as single tumours in patients who may also develop conventional melanocytic naevi without significant atypia. When arising in patients with a BAP1 germline mutation, BIMT are often multiple and affect patients in their second decade of life. Cutaneous melanomas arising in patients with a BAP1 germline mutation develop de novo, without a precursor lesion, in the majority of patients. PRAME consistently shows patchy and weak staining in BIMT and serves as a reassuring tool to distinguish BIMT from melanoma. Yitong Xu: data collection, analysis and interpretation of results. Alejandro A Gru: immunohistochemical staining and interpretation of results. Thomas Brenn: study conception and design, analysis and interpretation of results. Katharina Wiedemeyer: study conception and design, interpretation of results and manuscript preparation. The authors have nothing to disclose.
Genetic heterogeneity of pediatric systemic lupus erythematosus with lymphoproliferation
dcf1df38-8262-42e4-b851-ed5126cbd3b5
7254811
Pediatrics[mh]
Introduction Autoimmune and immunodeficiency diseases are outcomes of a dysfunctional immune system and represent 2 sides of the same coin. Multiple single-gene defects have been identified, resulting in rare diseases with features of both autoimmunity and immunodeficiency. Systemic lupus erythematosus (SLE; Online Mendelian Inheritance in Man [OMIM] 152700) is a prototype autoimmune disease with a strong genetic component characterized by differences in autoantibody profile, serum cytokines, and multisystem involvement commonly affecting the skin, renal, musculoskeletal, and hematopoietic systems. Early onset, familial, and/or syndromic SLE may reveal monogenic pathologies. Autoimmune lymphoproliferative syndrome (ALPS; OMIM 601859), a disease of defective lymphocyte homeostasis caused by dysfunction of the Fas Cell Surface Death Receptor (FAS)-mediated apoptotic pathway, is characterized by lymphadenopathy, hepatomegaly, splenomegaly, and autoimmune disease. Rat sarcoma (RAS)-associated autoimmune leukoproliferative disease (RALD; OMIM 614470) also presents as autoimmunity, lymphadenopathy, and/or splenomegaly. At the molecular level, RALD is defined by somatic mutations of either the NRAS or KRAS gene in a subset of hematopoietic cells. Signal transducer and activator of transcription 3 (STAT3) gain-of-function syndrome (OMIM 615952) is a new clinical entity characterized by early onset poly-autoimmunity, lymphoproliferation, and growth failure. Cell-surface interleukin-2 receptor α (IL2RA, CD25) expression is critical for maintaining immune function and homeostasis. Human IL2RA null mutations cause immunodeficiency with lymphoproliferation and autoimmunity (IL2RA deficiency; OMIM 606367). Therefore, we performed whole-exome sequencing (WES) in children with SLE and lymphoproliferation to identify genes associated with these conditions. Method The study was approved by the Ethics Committee at the Children's Hospital of Fudan University, Shanghai, China. All the patients’ parents provided written informed consent for enrollment in this study. 2.1 Patients In total, 7 Chinese children with SLE from 7 unrelated families were enrolled in this study. All patients fulfilled four 2019 European League Against Rheumatism/American College of Rheumatology (EULAR/ACR) criteria for the classification of SLE. Demographic data, clinical manifestations, laboratory and histopathologic findings, treatment, and outcome were documented. All patients were admitted to or followed up at our center (Children's Hospital of Fudan University) between 2011 and 2019. The follow-up cutoff date was August 2019. 2.2 DNA sequencing Genomic DNA was extracted and purified from peripheral leukocytes in whole-blood samples with a DNA isolation kit (Qiagen, Hilden, Germany). WES and bioinformatic analysis were performed in patient families as previously described. Only genes listed in OMIM ( https://www.omim.org/ ) were considered candidate causative genes. Variants identified by WES were confirmed by Sanger sequencing. 2.3 Peripheral blood mononuclear cell isolation and cell culture Peripheral venous blood was drawn from one healthy volunteer and 4 patients with NRAS mutations. The ethylenediaminetetraacetic acid-anticoagulated blood was diluted with an equal volume of phosphate-buffered saline (PBS), pH 7.4. The diluted blood was carefully layered on top of Ficoll-Paque PLUS (GE Healthcare, Shanghai, China) and centrifuged at 2000 rpm for 10 minutes at room temperature.
The top layer containing plasma was removed, and the remaining blood was diluted with an equal volume of PBS. After being washed twice in PBS, peripheral blood mononuclear cells (PBMCs) were cultured in RPMI 1640 supplemented with 10% fetal calf serum (FCS) at a density of 1 × 10 6 cells/mL. After incubation in a 24-well plate at 37°C in 5% CO 2 for 24 hours, the cells were harvested for subsequent experiments. 2.4 Western blot analysis Total and nuclear proteins were extracted using a protein extraction kit (Beyotime, Shanghai, China) following the manufacturer's instructions. Equal amounts of cytoplasmic or nuclear extracts were separated by 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis and transferred to 0.45 μm PVDF membranes (Millipore, MA, USA). Blots were probed with primary antibodies against BCL-2-interacting mediator of cell death (BIM) and β-actin (Cell Signaling Technology, Beverly, MA). Primary antibodies were detected with a horseradish peroxidase-conjugated secondary antibody. Visualization was conducted using an enhanced chemiluminescence (ECL) peroxidase substrate.
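As noted in Section 2.2, only genes listed in OMIM were considered candidate causative genes, and variants identified by WES were then confirmed by Sanger sequencing. The sketch below is a hypothetical illustration of that kind of gene-level filter, not the authors' actual bioinformatic pipeline; the gene set, thresholds, field names, and toy variant records are assumptions made only for demonstration.

```python
# Hypothetical prioritisation of annotated WES calls: keep rare, protein-altering
# variants in OMIM-listed candidate genes (only a subset of genes shown).
OMIM_CANDIDATE_GENES = {"NRAS", "KRAS", "FAS", "CASP10", "TNFAIP3", "PIK3CD", "IL2RA", "STAT3"}
PROTEIN_ALTERING = {"missense", "nonsense", "frameshift", "splice_site"}
MAX_POPULATION_AF = 0.001  # illustrative rarity threshold

def prioritise(variants):
    """Return the variants that would be taken forward for Sanger confirmation."""
    kept = []
    for v in variants:
        in_candidate_gene = v["gene"] in OMIM_CANDIDATE_GENES
        protein_altering = v["consequence"] in PROTEIN_ALTERING
        rare = v["population_af"] <= MAX_POPULATION_AF
        if in_candidate_gene and protein_altering and rare:
            kept.append(v)
    return kept

# Toy records standing in for annotated calls from one patient.
calls = [
    {"gene": "NRAS", "hgvs": "c.38A>G", "consequence": "missense", "population_af": 0.0},
    {"gene": "TTN", "hgvs": "c.2T>C", "consequence": "missense", "population_af": 0.02},
]
for v in prioritise(calls):
    print(v["gene"], v["hgvs"])  # -> NRAS c.38A>G
```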
Results 3.1 Clinical data 3.1.1 Clinical characteristics All 7 patients were Chinese. Four children were male, and 3 were female. No consanguinity was reported within the 7 families. The average age at onset was 5.0 years (range from 1.2 to 10.0 years). The most common features were renal (proteinuria and/or hematuria; 7/7 patients) and hematologic (cytopenia; 6/7 patients) involvement and recurrent fever (6/7 patients), while only 2 patients presented with skin involvement. Antinuclear antibodies at a titer of ≥1:320 were positive in all patients. They fulfilled 2019 EULAR/ACR criteria for the classification of SLE. All patients had hepatomegaly and/or splenomegaly and/or lymphadenectasis. Bone marrow aspiration in all patients showed no malignant cells or nonspecific changes. Cervical lymph node biopsy also revealed no malignant cells or nonspecific changes (lymphocyte proliferation) in patients 2 and 5. Liver biopsy revealed fibrosis in patient 7. Patient 1 had macrophage activation syndrome (MAS) before admission to our center, and patient 3 presented with MAS after a disease flare-up, both characterized by cytopenia, hyperferritinemia, hypertriglyceridemia, hypofibrinogenemia, and increased levels of alanine aminotransferase, glutamic-oxalacetic transaminase, and lactate dehydrogenase. The clinical and laboratory characteristics of the patients are summarized in Tables and . 3.2 Immunologic features The immunologic characteristics of the patients are listed in Table . The B lymphocyte subgroups were elevated in patients 1, 4, and 5, normal in patients 2 and 3, and decreased in patients 6 and 7. However, the immunoglobulin (Ig)G level was elevated in all patients. IgA levels were increased in patients 1 to 3 and 6 and normal in patients 4, 5, and 7. IgM levels were elevated in patients 1, 6, and 7 and normal in patients 2 to 5. Except for the normal IgE level in patient 3, an increased IgE level was detected in the other patients. We observed that the proportion of double-negative T cells (DNT) (CD3 + CD4 − CD8 − ) was increased in patient 7, while it was normal in the others. The proportion of CD3 + CD4 + CD8 − T cells was reduced in patients 1 and 4 to 6 and normal in patients 2, 3, and 7. The proportion of CD3 + CD4 − CD8 + T cells was elevated in patients 3, 6, and 7 and normal in patients 1, 2, 4, and 5. Natural killer cell (CD3 − CD16 + CD56 + ) numbers were reduced in patients 2–6 and normal in patients 1 and 7. Obvious monocytosis was found in almost all patients by routine blood examination. 3.3 Therapy and follow-up The mean follow-up time was 4.5 years (range from 1.6 to 7.7 years). All patients were treated with hydroxychloroquine, glucocorticoid, and immunosuppressive agents, including cyclosporine, mycophenolate mofetil, tacrolimus, and cyclophosphamide (Fig. ). Methylprednisolone pulse therapy was given to patients 1 (at the local hospital), 4, and 5 at the beginning because of severe disease. Mycophenolate mofetil was switched to sirolimus in patient 6, and etanercept was added for patient 7 after molecular diagnosis. Patient 1 was treated with cyclosporine at the beginning, and patient 3 was switched to cyclosporine, in both cases because the disease was complicated by MAS. The disease flared up again during tapering of glucocorticoids and reduction in the dose of immunosuppressive agents in patients 1, 3, 4, and 7.
3.4 Whole-exome sequencing An average of 11.6 Gb of raw sequence data was generated with 92.68 × depth of exome target regions for each individual as paired-end 150 base pair reads. A total of 91.4% of the raw data had sequencing quality above Q30. The coverage of at least 10 × and 20 × of the target regions was 99.62% and 97.6%, respectively. We identified a heterozygous c.38 A>G mutation (p.G13C) in the NRAS gene in peripheral venous blood from patients 1 to 4. Neither parent harbored a mutation in the NRAS gene, suggesting that these patients harbored de novo germline or somatic mutations (Fig. ). Another 2 heterozygous mutations, c.559C>T (p.Q187X) in the TNFAIP3 gene and c.3061G>A (p.E1021K) in the PIK3CD gene, were detected in 2 patients. The former was inherited from the patient's father, and the latter was absent in both parents. No mutations were detected in patient 5, and no mutations in other genes associated with primary immunodeficiencies and monogenic SLE were identified in any patient. 3.5 Sanger sequencing All mutations were confirmed by Sanger sequencing in the 7 families (Fig. ). Using DNA extracted from somatic cells (nails and buccal mucosa) in patients with NRAS mutations, NRAS exon 1 was amplified by PCR, and then the products were cloned. Mutated alleles were observed less frequently in the buccal mucosa and nails (42.8% and 8.8%, respectively) than in the blood (52.0%) in patient 1 (Fig. A). Similar results were found in the other 3 patients (Fig. B). The exon containing c.559C>T in the TNFAIP3 gene was screened by Sanger sequencing in patient 7's grandparents. The mutation was not identified in her grandparents. All mutations were checked in mutation databases for human populations, such as ExAC Browser ( http://exac.broadinstitute.org/ ), 1000 Genomes ( http://www.internationalgenome.org/ ), and HGMD ( http://www.hgmd.cf.ac.uk/ac/index.php ). They were all found in the above mutation databases. 3.6 Levels of BIM in PBMCs from patients Gain-of-function NRAS mutations hyperactivate the RAS/RAF/ERK pathway, which in turn negatively regulates BIM expression in patients with NRAS mutations. Western blot analysis showed that BIM levels in PBMCs from the 4 patients were markedly reduced, whereas those in the control were normal (Fig. ).
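The tissue-to-tissue differences in mutant-allele fraction reported in Section 3.5 are what separate somatic mosaicism from an inherited heterozygous change, which would sit near 50% in every tissue. A minimal sketch of the underlying arithmetic is shown below; the clone counts are hypothetical stand-ins chosen so that the resulting fractions approximate the values reported for patient 1, and they are not the authors' raw data.

```python
# Hypothetical colony counts from cloned PCR products of NRAS exon 1.
clone_counts = {
    "blood":         {"mutant": 52, "total": 100},
    "buccal_mucosa": {"mutant": 43, "total": 100},
    "nails":         {"mutant": 9,  "total": 102},
}

for tissue, counts in clone_counts.items():
    fraction = counts["mutant"] / counts["total"]
    print(f"{tissue}: mutant allele fraction = {fraction:.1%}")

# An inherited heterozygous variant would give ~50% in every tissue; the far
# lower fraction in nails indicates that the mutation is carried by only a
# subset of cells and is enriched in the hematopoietic lineage, i.e. mosaicism.
```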
Discussion Here, we report a cohort of patients with SLE and chronic lymphoproliferation. The clinical and laboratory data in all patients fulfilled four 2019 EULAR/ACR criteria for the classification of SLE. The average age at onset was 5.0 years (range from 1.2 to 10.0 years). The male-to-female ratio was 4:3. In recent years, multiple monogenic causes of early onset autoimmunity and lymphoproliferation have been identified, such as the FAS , CASP10 , NRAS , IL2RA , and STAT3 genes. Therefore, we performed WES in our patients. The results of our study showed germline mutations in the TNFAIP3 and PIK3CD genes and somatic mutations in the NRAS gene, and no mutations in other genes associated with primary immunodeficiencies and monogenic SLE, such as the FAS , CASP10 , IL2RA , and STAT3 genes. WES revealed a heterozygous c.559C>T (p.Q187X) mutation in the TNFAIP3 gene in patient 7, which was inherited from her father and was not identified in her grandparents. The patient was reported in our previous study. Recently, heterozygous germline mutations in the TNFAIP3 gene have been found to cause haploinsufficiency of A20, which presents as an early onset autoinflammatory disease mainly characterized by SLE or Behçet-like disease.
Mutations in the TNFAIP3 gene have also been reported in children with uncharacterized autoimmune diseases and lymphoproliferation and the autoimmune lymphoproliferative syndrome phenotype. The de novo mutation c.3061G>A (p.E1021K) in the PIK3CD gene was detected in patient 6, which was also reported in our previous study. Gain-of-function mutations in the PIK3CD gene, encoding phosphatidylinositol 3-kinase (PI3K) p110δ, were recently associated with a novel combined immune deficiency characterized by recurrent sinopulmonary infections, reduced class-switched memory B cells, lymphadenopathy, CD4 + lymphopenia, Cytomegalovirus (CMV) and/or epstein-barr virus (EBV) viremia and EBV-related lymphoma. PI3Kδ contributes to the induction of enhanced SLE memory T-cell survival, and its pathway is frequently activated in SLE patient PBMCs and T cells, more markedly in active disease phases. Additionally, the magnitude of PI3K pathway activation in patients with SLE paralleled activated/memory T-cell accumulation. Therefore, the PI3K pathway may be involved in human SLE. A heterozygous mutation, c.38 A>G (p.G13C) in the NRAS gene was identified in patients 1 to 4. Neither parent harbored a mutation in the NRAS gene, suggesting that the patient harbored a de novo germline or somatic mutation. Using DNA extracted from somatic cells (nails and buccal mucosa), NRAS exon 1 was amplified by PCR. Then, the products were cloned. Mutated alleles were observed less frequently in the buccal mucosa and nails (42.8% and 8.8%, respectively) than in the blood (52.0%) in patient 1. Similar results were found in the other 3 patients. Consequently, these patients harbored a somatic NRAS mutation. Obvious monocytosis in routine blood examination and elevated IgG levels in serum were found, while the CD3 + TCRαβ + CD4 - CD8 - (αβ-DNT) cell count was normal. These 4 patients fulfilled RALD diagnosis based on lymphoproliferation, autoimmune cytopenia, and without a defect in FAS-dependent apoptosis or an increase in peripheral αβ-DNT cells. The NRAS is a member of the p21 small GTPase family of proteins that also includes HRAS and KRAS . Germline RAS mutations are associated with specific developmental disorders, including Noonan (NS; OMIM 613224), Costello (OMIM 218040), and cardiofaciocutaneous syndromes (OMIM 115150). Somatic RAS mutations are seen in 30% of all human cancers. A previous study confirmed that the G13D NRAS mutation in germline cell causes BIM downregulation and defective intrinsic mitochondrial apoptosis prominently in lymphocytes, leading to RALD and hematopoietic malignancies. However, another study revealed that somatic mosaicism, again for the G13D NRAS mutation, causes BIM downregulation in activated T cells from children's patients, leading to RALD and juvenile myelomonocytic leukemia. Western blot analysis in our study showed that BIM levels in PBMCs from these 4 patients were markedly reduced, whereas those in the control were normal. However, our patients all presented with SLE. Thus, SLE may be a novel phenotype of patients with somatic NRAS mutations. Interestingly, both germinal and somatic mutations in the NRAS gene might be involved in the pathogenesis of autoimmune diseases. 
RASopathies are autosomal dominant neurodevelopmental syndromes resulting from germline mutations in genes that participate in the rat sarcoma/mitogen-activated protein kinase pathway, an important signal transduction pathway through which extracellular ligands stimulate cell proliferation, differentiation, survival, and metabolism. The association between RASopathies and autoimmunity has been highlighted by the presence of autoimmune antibodies in 52% of 42 patients with RASopathies, including 39% of 37 NS patients. Of these, 6 patients fulfilled the clinical criteria for autoimmune diseases, including SLE. The prevalences of NS and SLE are approximately 1 per 2000 births and 3.3 to 24 per 100,000 children, respectively. The co-occurrence of these 2 rare diseases and the high overall percentage of patients with NS who have autoimmune features suggest that they might be related and that RASopathies should be added to the growing list of causes of monogenic SLE, including NRAS gene mutations. In its typical form, SLE is considered a disease of women of reproductive age, although males or females of any age can be affected. It is very rare before 5 years of age. The typical age at diagnosis is between 15 and 45 years. The female-to-male ratio varies among cohorts but is generally estimated at approximately 9:1 and 4:1 in adult- and child-onset disease, respectively. The more common early manifestations are arthritis, photosensitive rashes, glomerulonephritis, and cytopenias. Among the patients with mutations in our study, the most commonly affected systems or features were renal (6/6 patients) and hematologic (6/6 patients) involvement and recurrent fever (6/6 patients), while only 1 patient presented with skin involvement. The average age of onset was 4 years. Thus, the presentation in our patients differed from classic SLE by a higher male-to-female ratio of 1:1, a lower rate of skin involvement (1/6 patients), and the occurrence of a lymphoproliferative disorder in some patients. We therefore suggest that patients with SLE and lymphoproliferation who present with renal and hematologic involvement and recurrent fever should undergo genetic testing, especially male patients. A few previous reports showed that patients with somatic NRAS or KRAS mutations could follow a more benign clinical course requiring minimal medication. However, MAS was observed in patient 1 at the beginning and in patient 3 after a disease flare, both characterized by fever, multilineage cytopenia, hyperferritinemia, hypertriglyceridemia, and hypofibrinogenemia. Our patients were all treated with oral prednisolone and immunosuppressive agents, and methylprednisolone pulse therapy was given to patients 1, 4, and 5 at the beginning because of severe disease. Clinical features improved rapidly after treatment. However, the disease flared up again when oral doses of prednisolone were tapered to 1.25 to 5 mg per day in 3 patients (1, 3, and 4). Therefore, we believe that SLE complicated by a lymphoproliferative disorder caused by these gene mutations is not a benign disease. In addition, it is not yet clear whether patients with somatic NRAS mutations progress to full-blown disease or maintain a stable clinical course. These patients need to be monitored carefully. No mutations in genes associated with monogenic SLE and primary immunodeficiencies were detected in patient 5.
She presented with persistent cervical lymphadenopathy, proteinuria, hematuria, and purpura of the lower limbs, without recurrent fever or hematologic involvement, a presentation that differs from that of the mutation-positive patients in this study. Conclusion Our findings suggest that SLE may be a novel phenotype of somatic mutations in the NRAS gene and germline mutations in the PIK3CD gene. These genes, NRAS , TNFAIP3 , and PIK3CD , should be considered candidate genes in children with SLE and lymphoproliferation. There are some limitations to our study, such as the small number of cases, the relatively uniform phenotype, and the use of only 1 healthy control for western blot analysis. An unbiased genetic screening of larger cohorts of patients with childhood-onset SLE with diverse clinical presentations is needed to better estimate the relations between genotypes and phenotypes of monogenic SLE. In addition, WES is an effective method for identifying clinically significant exonic variants. However, it does not capture evolutionarily conserved regulatory DNA elements in untranslated, intronic, and intergenic regions that may be associated with the disease. Whole-genome sequencing can overcome these limitations and also identify small copy number variations and mitochondrial DNA mutations. The authors thank the patients and their parents. Conceptualization: Guomin Li, Haimei Liu, Li Sun. Data curation: Guomin Li, Haimei Liu, Yi-fan Li, Wanzhen Guan, Hong Xu. Investigation: Guomin Li, Yifan Li, Wanzhen Guan, Hong Xu, Tao Zhang. Methodology: Guomin Li, Bingbing Wu, Yao Wen, Yu Shi. Supervision: Li Sun. Validation: Haimei Liu, Hong Xu, Bingbing Wu, Tao Zhang, Yao Wen, Yu Shi. Visualization: Yi-fan Li, Wanzhen Guan, Tao Zhang, Yu Shi. Writing – original draft: Guomin Li. Writing – review & editing: Li Sun.
An
90c90e2f-3fe5-46b9-be0e-3da0c9e3f020
9664072
Physiology[mh]
Intracellular electrophysiology, as performed via the whole-cell patch-clamp technique, is a hallmark method for characterizing the biophysical features of neurons. While there have been numerous datasets characterizing these features from cortical neurons in the rodent brain , comparatively fewer resources provide high-quality whole-cell patch-clamp recordings from human cortical neurons due to the relative inaccessibility of human tissue. However, collaborations between neurosurgeons and basic neuroscientists have recently made it possible to characterize living cortical neurons in brain slices immediately prepared from biopsies following routine neurosurgery . Still, there remain relatively few datasets of human cortical neuron physiology that are openly accessible and free for reuse to complement and compare to the Allen Brain Institute Cell Types Database (Allen Cell Types Database, RRID:SCR_014806 ) . Here, we describe an openly accessible dataset of electrophysiological recordings from human and mouse cortical neurons. The dataset encompasses 132 whole-cell patch-clamp recordings from surgically resected human tissue (118 cells from 35 individuals) or from 21-day-old mice (11 cells from 5 mice). These datasets are made available in the Neurodata Without Borders (NWB) ( RRID:SCR_015242 ) electrophysiology data format via the Distributed Archives for Neurophysiology Data Integration (DANDI) data archive. We provide morphological reconstructions for N = 7 cells, made available at NeuroMorpho.org ( RRID:SCR_002145 ). Each recording is made available with rich subject and experimental protocol metadata, enabling subsequent reuse and comparison with analogous datasets from other species and sources. Human surgical tissue Resected human cortical tissues were obtained from Toronto Western Hospital (University Health Network, Canada). All procedures on human tissue were performed in accordance with the Declaration of Helsinki and approved by the University Health Network Research Ethics board . Patients underwent a standardized temporal, parietal, or frontal lobectomy under general anesthesia using volatile anesthetics for seizure or tumor treatment . Tissue was obtained from patients diagnosed with temporal ( n = 34), frontal ( n = 1), or parietal lobe ( n = 1) epilepsy or brain tumors ( n = 4) in 17 male and 18 female patients, age ranging from 21 to 59 years (mean age ± SD: 40.5 ± 12.0). Written informed consent was obtained from all study participants to use their tissue and to share the acquired data with anonymized demographic information—namely, subject age at time of surgery, sex, years of seizure, seizure frequency, secondarily generalized seizure frequency (using clinical records and epilepsy monitoring unit recordings), antiepileptic drug treatment, and type of seizure. The resected cortical tissue from the temporal lobe–middle temporal gyrus exhibited no structural or functional abnormalities in preoperative magnetic resonance imaging and was considered “relatively healthy” by ourselves and others as it was located outside of the site of epileptogenesis . Cortical tissue from the frontal cortex from patients with epilepsy was considered “epileptogenic” tissue and confirmed with independent electrocorticography (and annotated as such in our metadata). For tumor cases, cortical tissue blocks were obtained from tissue at a distance from the main site of the tumor (i.e., such cortical tissue was not taken directly from the tumor itself). 
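Before turning to the tissue-specific methods, a brief sketch of how one of the converted NWB recordings described above can be opened and its subject metadata inspected is shown below. The file name is a placeholder, and individual metadata fields may vary between files in the archive.

```python
# Minimal example of reading one recording with pynwb (file name hypothetical).
from pynwb import NWBHDF5IO

with NWBHDF5IO("sub-01_cell-01.nwb", "r") as io:
    nwb = io.read()

    # Subject-level metadata (anonymised subject_id, species, age, sex, ...)
    subj = nwb.subject
    print(subj.subject_id, subj.species, subj.age, subj.sex)

    # Each current-clamp sweep is stored as an acquisition series, with the
    # corresponding injected current stored as a stimulus series.
    print("response sweeps:", list(nwb.acquisition)[:3], "...")
    print("stimulus sweeps:", list(nwb.stimulus)[:3], "...")

    # Pull one response trace into memory as a NumPy array (volts).
    first = next(iter(nwb.acquisition.values()))
    print(first.name, first.data[:].shape, first.rate, "Hz")
```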
Mouse specimens All experimental procedures involving mice were reviewed and approved by the animal care committees of the University Health Network in accordance with the guidelines of the Canadian Council on Animal Care. Mixed male and female wild-type C57Bl/6 J, age postnatal 21 days, were used for experiments. Mice were kept on a 12-hour light/dark cycle and had free access to food and water. Acute brain slice preparation Immediately following surgical human cortical resection, the cortical specimens were submerged in an ice-cold (∼4°C) cutting solution that was continuously bubbled with carbogenated (95% O 2 /5% CO 2 ) artificial cerebrospinal fluid (aCSF) containing (in mM) the following: sucrose, 248; KCl, 2; MgSO 4 .7H 2 O, 3; CaCl 2 .2H 2 O, 1; NaHCO 3 , 26; NaH 2 PO 4 .H 2 O, 1.25; and D-glucose, 10. The osmolarity was adjusted to 300–305 mOsm. Transverse brain slices (400 μm) were sectioned using a vibratome (Leica 1200 V) Germany in cutting solution. Tissue slicing was performed perpendicular to the pial surface to help ensure that pyramidal cell dendrites were minimally truncated . The cutting solution was the same as used for transport of tissue from the operating room to the laboratory. The time between tissue resection and slice preparation was less than 10 minutes. After sectioning, the slices were incubated for 30 minutes at 34°C in standard aCSF (in mM): NaCl, 123; KCl, 4; CaCl 2 .2H 2 O, 1; MgSO 4 .7H 2 O, 1; NaHCO 3 , 26; NaH 2 PO 4 .H 2 O, 1.2; and D-glucose, 10, pH 7.40. All aCSF and cutting solutions were continuously bubbled with carbogen gas (95% O 2 –5% CO 2 ) and had an osmolarity of 300–305 mOsm. Following this incubation, the slices were maintained in standard aCSF at 22–23°C for at least 1 hour, until they were individually transferred to a submerged recording chamber. Brain slice preparation was done in a similar way for mice and human tissue. Mice were deeply anesthetized by isoflurane 1.5–3.0%. After decapitation, brains were submerged in (∼4°C) cutting solution that was continuously bubbled with 95% O 2 –5% CO 2 containing (in mM) sucrose, 248; KCl, 2; MgSO 4 .7H 2 O, 3; CaCl 2 .2H 2 O, 1; NaHCO 3 , 26; NaH 2 PO 4 .H 2 O, 1.25; and D-glucose, 10. Mouse somatosensory cortical slices (350 μm) were prepared in the coronal plane similar to human slice preparation as described above. A subset of cortical slices in both human and mouse was prepared using the N-methyl-D-glucamine (NMDG) protective recovery method . The cortical tissue blocks were transferred and sectioned in 2–4 °C NMDG-HEPES aCSF solution containing (in mM) NMDG, 92; KCl, 2.5; NaH 2 PO 4 , 1.25; NaHCO 3 , 30; HEPES, 20; glucose, 25; thiourea, 2; Na–L-ascorbate, 5; Na-pyruvate, 3; CaCl 2 .4H 2 O, 0.5; and MgSO 4 .7H 2 O 10 (mM). The pH of NMDG-HEPES aCSF solution was adjusted to 7.3–7.4 using hydrochloric acid, and the osmolarity was 300–305 mOsm. The cortical slices were prepared using a vibratome as described above. After slicing, slices were transferred to a recovery chamber filled with 32–34 °C NMDG-HEPES aCSF solution, which continuously bubbled with 95% O 2 –5% CO 2 . After 12 minutes, the slices were transferred to an incubation solution—HEPES aCSF—containing (in mM) NaCl, 92; KCl, 2.5; NaH 2 PO 4 .H 2 O, 1.25; NaHCO 3 , 30; HEPES, 20; glucose, 25; thiourea, 2; Na–L-ascorbate, 5; Na-pyruvate, 3; CaCl 2 .4H 2 O, 2; and MgSO 4 .7H 2 O, 2. 
After a 1-hour incubation at room temperature, slices were transferred to a recording chamber and continuously perfused with aCSF containing (in mM) NaCl, 126; KCl, 2.5; NaH 2 PO 4 .H 2 O, 1.25; NaHCO 3 , 26; glucose, 12.6; CaCl 2 .2H 2 O, 2; and MgSO 4 .7H 2 O 1 (mM) . Whole-cell patch-clamp recording from human and mice cortical slices For electrophysiological recordings, cortical slices were placed in a recording chamber mounted on a fixed-stage upright microscope (Axioskop 2 FS MOT; Carl Zeiss, Germany) Oberkochen, Baden-Württemberg. Slices were continuously perfused with carbogenated (95% O 2 /5% CO 2 ) aCSF containing (in mM) NaCl, 123; KCl, 4; CaCl 2 .2H 2 O, 1.5; MgSO 4 .7H 2 O, 1.3; NaHCO 3 , 26; NaH 2 PO 4 .H 2 O, 1.2; and D-glucose, 10, pH 7.40, at 32–34°C. Cortical neurons were visualized using an IR-CCD camera (IR-1000; MTI, USA) Albany, NY with a 40× water immersion objective. Patch pipettes (3–6 MΩ) were pulled from standard borosilicate glass pipettes (thin-wall borosilicate tubes with filaments; World Precision Instruments, Sarasota, FL, USA) using a vertical puller (PC-10; Narishige) Japan. For somatic recording of electrophysiological properties, patch pipettes were filled with intracellular solution containing (in mM) K-gluconate, 135; NaCl, 10; HEPES, 10; MgCl 2 , 1; Na 2 ATP, 2; and GTP 0.3, pH adjusted with KOH to 7.4 (290–309 mOsm). A subset of data was collected with excitatory (APV 50 μM, Sigma [St. Louis, MO, USA]; CNQX 25 μM, Sigma) and inhibitory (Bicuculline 10 μM, Sigma; CGP-35348 10 μM, Sigma) synaptic activity blocked. Electrical signals were measured with a Multiclamp 700A amplifier, Axopatch 200B amplifier, pClamp 9.2, and pClamp 10.6 data acquisition software (Axon Instruments; Molecular Devices, San Jose, CA, USA). Subsequently, electrical signals were digitized at 20 kHz using a 1320X digitizer or a 1440A digitizer (Axon Instruments; Molecular Devices). The access resistance was monitored throughout the recording (typically between 8 and 20 MΩ), and neurons were discarded if the access resistance was >25 MΩ. Recordings were not corrected for bridge balancing due to the short duration of recording time. We note that stimulus parameters for each recording are not identical across recorded cells, in part due to technical considerations by the experimentalist, for example, to prevent losing the cell recording. Axon binary format to NWB file conversion The x-to-nwb repository was used to convert current clamp recordings in axon binary format (ABF) to NWB format. Separate converters were used for files recorded using pClamp ( RRID:SCR_011323 ) 9.0, which output ABFv1 files, and pClamp >10.0, which output ABFv2 files, to ensure valid conversions while incorporating the essential metadata. The key aspects of our usage of these data conversion computer scripts relate to defining which ABF channels correspond to stimulus and response traces and ensuring that appropriate scale and offset factors are applied properly upon conversion. 
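To make the channel-mapping and scaling concerns above concrete, the stripped-down sketch below converts one ABF file's sweeps into NWB current-clamp series. It is not the x-to-nwb converter used for this dataset: channel indices, gains, and identifiers are hypothetical, and some icephys constructor and helper names may differ between pynwb versions.

```python
# Hypothetical ABF -> NWB conversion sketch (not the authors' x-to-nwb pipeline).
from datetime import datetime, timezone

import pyabf
from pynwb import NWBFile, NWBHDF5IO
from pynwb.icephys import CurrentClampSeries, CurrentClampStimulusSeries

RESPONSE_CHANNEL = 0          # ADC channel assumed to carry membrane voltage (mV)
MV_TO_V, PA_TO_A = 1e-3, 1e-12  # explicit scale factors applied on conversion

abf = pyabf.ABF("cell.abf")
nwb = NWBFile(session_description="whole-cell current clamp recording",
              identifier="cell-01",
              session_start_time=datetime.now(timezone.utc))
device = nwb.create_device(name="MultiClamp 700A")
electrode = nwb.create_icephys_electrode(name="elec0",
                                         description="borosilicate patch pipette",
                                         device=device)

for sweep in range(abf.sweepCount):
    abf.setSweep(sweep, channel=RESPONSE_CHANNEL)
    response = CurrentClampSeries(name=f"response_{sweep:03d}",
                                  data=abf.sweepY * MV_TO_V,       # mV -> V
                                  electrode=electrode, gain=1.0,
                                  rate=float(abf.dataRate), starting_time=0.0)
    stimulus = CurrentClampStimulusSeries(name=f"stimulus_{sweep:03d}",
                                          data=abf.sweepC * PA_TO_A,  # pA -> A
                                          electrode=electrode, gain=1.0,
                                          rate=float(abf.dataRate), starting_time=0.0)
    nwb.add_acquisition(response)
    nwb.add_stimulus(stimulus)

with NWBHDF5IO("cell-01.nwb", "w") as io:
    io.write(nwb)
```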
We incorporated the ndx-dandi-icephys metadata extensions to allow for inclusion of user-defined “Subject” and “Lab” metadata fields to be able to include specific metadata, including “subject_id,” “age,” “species,” “cell_id,” and “tissue_sample_id.” Relevant metadata were recorded in 2 separate tables: first, patient-level information, including demographics and clinical information, and, second, recording specific information, which relates to aspects of each individual cell's recording, such as channels corresponding to stimulus, response, and resting membrane potential. The patient-level demographics table had fields including “Resection date,” “Resection procedure,” “Sex,” “Age,” “Years of seizure history,” “Diagnosis,” “Seizure type,” “Presence of a tumor,” and “Antiepileptic drugs.” Recording specific metadata included experiment “date,” “Cell number” to differentiate recordings from distinct cells taken on the same day, “Cell layer,” “Gain,” “Offset,” “Response channel,” “Command channel,” and “RMP” to record the resting membrane potential at the initial time of recording. Additional recording metadata were extracted directly from ABF files using custom scripts to extract the stimulus start and end times and the stimulus sampling rate. Electrophysiology feature extraction The Intrinsic Physiology Feature Extractor (IPFX) toolbox was used to extract features from converted NWB files . All experiments consisted of long-square hyperpolarizing and depolarizing current injections, and extracted features included subthreshold features (i.e., input resistance, sag ratio), action potential properties (i.e., action potential half-width, threshold time, and voltage) derived from the rheobase spike as well as multiaction potential spike train features derived from the IPFX-defined “hero” sweep (i.e., adaptation index), as described previously . Our included metadata files contain stimulus start and end times along with an IPFX-compatible stimulus description ontology file for reproducibility and to facilitate the feature extraction process. Quality control of contributed neuron recordings We performed both automated and manual quality control checks of converted recordings to ensure dataset quality and maximize reuse potential. Using features automatically extracted via IPFX, we checked whether the baseline voltage of a sweep (i.e., v_baseline) deviated by more than 10 mV from the initial measure in the first current injection step. Any cell recordings that had any sweep deviate beyond the 10-mV threshold were not included in the final contributed dataset. We also included the measures for maximum drift of baseline Vm in each recording's metadata under the field max_drift_Vm. Also, individual recordings were manually inspected at 3 injected current steps (the most hyperpolarizing pulse, the rheobase, and the most depolarizing step). In addition, we further manually inspected each neuron recording's frequency/input curve to identify any abnormal responses and also to identify putative recordings from interneurons. Following this manual inspection process, we note that in some instances, we observed some evidence for spike saturation at higher steps of current injection. We also noted some instances of cells spiking spontaneously (i.e., spiking outside of the window of injected current), but we chose not to reject these sweeps or cells according to our quality control criteria. 
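A minimal numpy version of the automated baseline-drift check described above is sketched here; it assumes response sweeps have already been loaded as arrays in volts with a known sampling rate and stimulus onset, and it only approximates the IPFX-based implementation actually used.

```python
# Sketch of the >10 mV baseline-drift rejection rule (assumptions as stated above).
import numpy as np

DRIFT_LIMIT_MV = 10.0

def baseline_mv(v_volts, fs, stim_start_s, pre_window_s=0.1):
    """Mean membrane potential (mV) over a window just before stimulus onset."""
    stop = int(stim_start_s * fs)
    start = max(0, stop - int(pre_window_s * fs))
    return 1e3 * float(np.mean(v_volts[start:stop]))

def passes_drift_check(sweeps, fs, stim_start_s):
    """Reject the cell if any sweep's baseline drifts >10 mV from the first sweep."""
    baselines = [baseline_mv(v, fs, stim_start_s) for v in sweeps]
    max_drift = max(abs(b - baselines[0]) for b in baselines)
    return max_drift <= DRIFT_LIMIT_MV, max_drift  # second value ~ max_drift_Vm

# usage: ok, max_drift_vm = passes_drift_check(sweeps, fs=20000, stim_start_s=0.2)
```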
Statistical analyses To detect statistical differences across experimental groupings, we report results using the 2-sample t -test, Wilcoxon rank-sum test, Kruskal–Wallis rank-sum test, or Pearson correlation using the statistical functions in base R. All statistical tests were performed using R version 4.1.2 .
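These analyses were run in base R; for readers who prefer Python, the same family of tests is available in scipy.stats, as sketched below on hypothetical feature vectors (the numbers are placeholders, not dataset values).

```python
# scipy.stats equivalents of the tests named above, on made-up feature vectors.
import numpy as np
from scipy import stats

rin_blockers = np.array([150., 210., 305., 190.])  # MΩ, hypothetical
rin_acsf     = np.array([70.,  95.,  60.,  110.])  # MΩ, hypothetical

# Welch two-sample t-test (R's t.test default does not assume equal variances)
t, p = stats.ttest_ind(rin_blockers, rin_acsf, equal_var=False)

# Wilcoxon rank-sum test
w, p_w = stats.ranksums(rin_blockers, rin_acsf)

# Kruskal–Wallis test across three groups (e.g., three resection lobes)
h, p_kw = stats.kruskal(rin_blockers, rin_acsf, np.array([120., 140., 95.]))

# Pearson correlation, e.g., a feature against patient age
r, p_r = stats.pearsonr(rin_acsf, np.array([25., 40., 33., 58.]))

print(t, p, w, p_w, h, p_kw, r, p_r)
```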
We performed both automated and manual quality control checks of converted recordings to ensure dataset quality and maximize reuse potential. Using features automatically extracted via IPFX, we checked whether the baseline voltage of a sweep (i.e., v_baseline) deviated by more than 10 mV from the initial measure in the first current injection step. Any cell recordings that had any sweep deviate beyond the 10-mV threshold were not included in the final contributed dataset. We also included the measures for maximum drift of baseline Vm in each recording's metadata under the field max_drift_Vm. Also, individual recordings were manually inspected at 3 injected current steps (the most hyperpolarizing pulse, the rheobase, and the most depolarizing step). In addition, we further manually inspected each neuron recording's frequency/input curve to identify any abnormal responses and also to identify putative recordings from interneurons. Following this manual inspection process, we note that in some instances, we observed some evidence for spike saturation at higher steps of current injection. We also noted some instances of cells spiking spontaneously (i.e., spiking outside of the window of injected current), but we chose not to reject these sweeps or cells according to our quality control criteria. To detect statistical differences across experimental groupings, we report results using the 2-sample t -test, Wilcoxon rank-sum test, Kruskal–Wallis rank-sum test, or Pearson correlation using the statistical functions in base R. All statistical tests were performed using R version 4.1.2 . In Table , we summarize the 3 main axes differentiating the cells and recordings in this dataset. Namely, recordings differed by species (human versus mouse), by cortical layer of the cell body of the recorded neuron (layer 23, layer 3c, and layer 5), and whether synaptic blockers were used in the external recording solution. Additionally, Fig. illustrates the breakdown of various metadata features associated with the various human electrophysiological recordings. Putative interneurons were identified by their action potential characteristics (large maximal firing rates and typically large spike after-hyperpolarization amplitudes) as described in Chameh et al. . One reason why synaptic blockers were used is to make a subset of recordings more consistent with protocols used in other labs, such as the Allen Institute for Brain Sciences . In current clamp mode, hyperpolarizing and depolarizing current injections (600–1,000 ms) were used to characterize biophysical features of cortical neurons, with examples from 3 recorded cells shown in Fig. . In Fig. , we highlight how the use of synaptic blockers in the external solution may affect recorded subthreshold neuronal properties. Specifically, among recorded human L5 neurons, there was a significant difference in the recorded input resistance between neurons recorded following application of synaptic blockers (208 ± 106 MΩ, n = 18) and regular aCSF (80.9 ± 36.6 MΩ, n = 40); t (19) = 4.94, P = 9.37e-05. However, in Fig. , there was no significant effect on the action potential width, t (28) = 1.14, P = 0.265, between neurons recorded following application of synaptics blockers and regular aCSF. Similarly, in Fig. , there was no detectable effect on the average firing rate of the cell at the IPFX-defined “hero” sweep, t (40) = 0.259, P = 0.797, between the same groups. To illustrate comparisons across species, in Figs. 
, , and , we show distributions of the input resistance, Action Potential (AP) width, and average firing rate of the “hero” sweep respectively recorded from neurons in both human and mouse cortical L5 neurons (in the presence of synaptic blockers). We did not detect a significant difference in the input resistance, t (11) = 1.32, P = 0.210, or average firing rate, t (25) = 0.0709, P = 0.944, observed between the recordings from the 2 species. However, when comparing the width of APs from recordings in human neurons (2.25 ± 0.890 ms, n = 18) and mouse neurons (1.55 ± 0.783, n = 11), there was a significant difference detected, t (23) = 2.23, P = 0.0355. To compare the effect of solution used for the brain slice preparation on intrinsic properties, we compared the input resistance and sag ratio recorded following preparation in either solution. In Fig. , we highlight a significant difference, t (16) = 2.52, P = 0.0224, of higher measured input resistance in the recordings made following preparation in the NMDG (266 ± 108 MΩ, n = 12) recovery solution compared to the sucrose solution (179 ± 75.4 MΩ, n = 25). In Fig. , we compare the sag ratio across the same conditions and observe no significant difference across the brain slice preparations, t (13) = 0.317, p = 0.756. The statistical comparisons made in Fig. were made after grouping all recordings from L23, L3C, and L5 using standard aCSF. These comparisons emphasize the potential importance of the conditions used for the experimental preparation (see Discussion). To illustrate the rich diversity of the metadata for each of the human recordings, in Fig. , we highlight specific comparisons of input resistance and sag ratio measurements recorded in regular aCSF across demographic conditions. Specifically, we focus on the input resistance as a fundamental passive electrophysiological property and the sag ratio as an active property that has previously been used to distinguish between subtypes of human neurons . In Figs. , , we compare distributions of these electrophysiological features across the 3 different brain lobes from which neuronal tissue was resected. Kruskal–Wallis rank-sum test was used to examine whether brain lobe resection location has a significant effect on input resistance or measured sag ratio. No significant differences in input resistance (χ 2 = 2.7968, df = 2, P = 0.247) or sag ratio (χ 2 = 3.50, df = 2, P = 0.174) were found across the 3 resected locations. In Figs. , , we compare the electrophysiological feature distributions measured in male and female patients. We did not detect any differences between recordings from male or female patients in input resistance, t (70) = 1.38, P = 0.172, or sag ratio, t (70) = 0.0644, P = 0.949. Note that all cells from frontal and parietal cortices were recorded from tissue resected near the site of the epileptogenic focus, whereas all cells from the temporal cortex were recorded distal from the epileptogenic focus (with the exception of 1 subject). Additionally, we illustrate the relationship of the input resistance and sag ratio against both patient age at time of surgical resection (Figs. , ) and years of seizure experienced by the patient prior to the surgical intervention (Figs. , ). Application scenarios The recordings in this database permit the quantification of biophysical properties from a diverse set of neurons, including human and mouse neurons with a well-described set of metadata. 
Independent variables collected include age, sex, seizure history, and cortical layer from which the tissue was resected. Additionally, experiments on the human neurons were performed with the use of synaptic blockers and without, allowing for comparisons and integration with other intrinsic electrophysiological databases comprising patch-clamp recordings, including from the Allen Cell Types Database ( RRID:SCR_014806 ). These data from current-clamp experiments are particularly beneficial for the development of conductance-based models of human neurons . In particular, we highlight that in some instances, it may be more suitable to constrain biophysical models to human data in the absence of synaptic blockers, that is to say, when background synaptic activity is having a significant effect on input resistance measurements. The voltage responses can be used as a training set to constrain biophysical models when combined and integrated with other publicly available databases that provide relevant morphologies and channel kinetics, such as NeuroMorpho.org, Channelpedia, or ICGenealogy . Moreover, fitting biophysical models to data that are grouped based on demographic information can allow for cross-group comparisons using in silico approaches. Usage of these models in neuronal or circuit simulations can thus help to further predict and unveil the potential effects of differences in neuronal physiology across demographic groupings . Discussion and Limitations The repository provided is focused mainly on cortical neurons derived from human tissue. There are comparatively fewer recordings for analyses of mouse neuron function provided, and all of these were performed using synaptic blockers that were shown to have a baseline effect on input resistance. The recordings from human specimens derived from tissue during the surgical resection of diseased tissue for patients with intractable epilepsy or brain tumors. Along with having suffered seizures for an extended period of time, the patients may have concurrently taken 1 or a variety of antiepileptic drugs that could have affected baseline neuronal excitability characteristics. While these data were collected for the purpose of characterizing intrinsic properties of human neocortical neurons, we note that they were collected using different sets of experimental conditions, including those related to different recording solutions as well as cutting solutions. Our analyses suggest such experimental condition differences likely contribute to differences in downstream electrophysiological properties and are consistent with prior analyses by ourselves and others . For example, the observed effect of synaptic blockers on the input resistance may be due to reduction of overall membrane permeability as a consequence of the block of both excitatory and inhibitory conductances . However, we did not observe a concurrent change in excitability characteristics such as the AP width, in agreement with previous findings that did not find a significant effect of synaptic blockers on AP characteristics or neuronal passive properties . In contrast, the comparisons of electrophysiological measures following different cutting solutions highlight potential effects on neuronal excitability. 
We observe variability in the measured input resistance but consider that these effects may be due to changes in conductivity across the membrane or also experimental biases in selection of healthy neurons for patch-clamp protocol due to differential response to solutions of different osmolarity. Furthermore, there exists contrasting results in the literature on the effects of NMDG on neuronal excitability and synaptic transmission, which shows the context dependence of the many experimental variables . Taken together, such potential differences in electrophysiological characteristics due to experimental conditions are important to consider when reusing these data in downstream analyses.
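As one example of such downstream reuse, the group comparisons reported above (two-sample tests, Kruskal–Wallis tests across resection lobes, correlations with age) are straightforward to rerun on the extracted feature tables. The original analyses used base R; the sketch below shows equivalent tests in Python with SciPy, applied to fabricated placeholder values rather than numbers from this dataset.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# placeholder feature vectors; in practice these come from the IPFX output tables
rin_human = rng.normal(200, 60, size=18)            # e.g., input resistance (MOhm), human L5
rin_mouse = rng.normal(170, 50, size=11)            # e.g., input resistance (MOhm), mouse L5

t_stat, p_t = stats.ttest_ind(rin_human, rin_mouse, equal_var=False)   # Welch-style two-sample t-test
u_stat, p_u = stats.mannwhitneyu(rin_human, rin_mouse)                 # rank-based alternative

lobes = [rng.normal(180, 50, 20) for _ in range(3)] # placeholder frontal / temporal / parietal groups
h_stat, p_kw = stats.kruskal(*lobes)                # Kruskal-Wallis rank-sum test

age = rng.uniform(20, 60, size=rin_human.size)      # placeholder age at resection
r, p_r = stats.pearsonr(age, rin_human)             # Pearson correlation

print(f"t = {t_stat:.2f} (p = {p_t:.3f}); H = {h_stat:.2f} (p = {p_kw:.3f}); r = {r:.2f} (p = {p_r:.3f})")
```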
ABF: axon binary format; aCSF: artificial cerebrospinal fluid; DANDI: Distributed Archives for Neurophysiology Data Integration; IPFX: Intrinsic Physiology Feature Extractor; NMDG: N-methyl-D-glucamine; NWB: Neurodata Without Borders. Both mouse and human data are available on the DANDI platform . Conversion, analysis scripts, and other associated metadata for recordings are available at GitHub . All other supporting data and materials are available in the GigaScience GigaDB database. Derek Howard Conceptualization, Project Administration, Formal Analysis, Software, Investigation, Validation, Data Curation, Writing—Original Draft Preparation, Writing—Review & Editing, Visualization Homeira Moradi Chameh Project Administration, Investigation, Data Curation, Writing—Review & Editing Alexandre Guet-McCreight Investigation, Visualization Huan Allen Hsiao Software, Investigation Maggie Vuong Software, Investigation Young Seok Seo Software, Investigation Prajay Shah Investigation Anukrati Nigam Investigation Yuxiao Chen Investigation Melanie Davie Investigation Etay Hay Writing—Review & Editing Taufik A Valiante Supervision, Resources, Funding Acquisition, Writing—Review & Editing Shreejoy Tripathy Conceptualization, Supervision, Methodology, Validation, Resources, Funding Acquisition, Writing—Original Draft Preparation, Writing—Review & Editing, Visualization.
Urgent Transcatheter Mitral Edge‐to‐Edge Repair Is Associated With Worse in‐Hospital Outcomes: A Nationwide Analysis
e0999aff-0af9-45ba-88e7-b8ae649c7bea
11892689
Surgical Procedures, Operative[mh]
Introduction Mitral regurgitation (MR) is one of the most common valvular heart diseases in the general population , particularly in patients with heart failure . MR is a factor associated with poor prognosis, with increased mortality and hospital readmissions, especially in the presence of reduced left ventricular ejection fraction . Transcatheter mitral valve edge‐to‐edge repair (TEER) has emerged as a promising alternative to surgery for patients with severe primary or secondary MR and favorable valve anatomy, particularly those considered high risk or ineligible for conventional surgery due to advanced age, significant comorbidities or anatomical constraints . This lesser invasive approach involves the percutaneous placement of a mitral valve clip to reduce regurgitation by approximating the mitral valve leaflets . While the majority of TEER procedures are planned and performed electively to optimize patient selection and preparation , there are scenarios where urgent intervention is required, such as acute decompensation leading to severe MR‐related symptoms, acute exacerbation of heart failure or hemodynamic instability . Despite the increasing use of TEER, there is a paucity of studies in the literature specifically addressing the outcomes of urgent TEER procedures. Previous research has highlighted the poorer outcomes associated with urgent invasive cardiovascular procedures compared to elective procedures . However, the unique considerations and outcomes associated with urgent TEER remain less explored. Using a robust data set that includes a wide range of patient demographics, clinical characteristics and procedural details, we aim to elucidate the impact of urgency on TEER outcomes. Therefore, our study aimed to compare in‐hospital outcomes in patients undergoing urgent versus non‐urgent TEER using a contemporary nationwide database. Methods We conducted a retrospective study using the National Inpatient Sample (NIS) database during the period 2016−2019. The NIS is a publicly available database of the Health Care Utilization Project, which contains data for 20% of discharge records from community hospitals across the United States. Admissions of adults who underwent in‐hospital TEER using the appropriate ICD‐10 procedure codes were included (Supporting Information S1: Table ). Admissions with missing data for covariates were excluded. Patients were divided into two groups, urgent and non‐urgent TEER, for comparison. For the definition of the type of TEER procedure (urgent vs. non‐urgent), we used the variable “ELECTIVE” from the NIS database which has two categories: non‐elective versus elective admission. The primary outcome was in‐hospital mortality, and the secondary outcomes were cardiogenic shock, pulmonary artery catheterization, intra‐aortic balloon pump (IABP), percutaneous ventricular assist device (PVAD), extracorporeal membrane oxygenation (ECMO), renal replacement therapy, mechanical ventilation, acute stroke, major bleeding, pericardial complication, length of hospital stay, and total charges (Supporting Information S1: Table ). Sociodemographic characteristics, comorbidities (based on Elixhauser Comorbidity Index), and hospital characteristics were reported. Categorical variables were expressed as frequencies and percentages and continuous variables as median (interquartile range [IQR]). A chi‐squared test with Rao & Scott's second‐order correction and Wilcoxon rank‐sum test were used to compare categorical and continuous variables, respectively. 
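As a rough illustration of how the cohort definition above and the weighting and outcome modelling described in the next paragraph fit together, the sketch below walks through the workflow in Python. The authors' analysis was performed in R 4.3.2, and the file name, column names, covariate list, and the single ICD-10-PCS code shown here are hypothetical placeholders (the codes actually used are listed in the Supporting Information), so this is a sketch of the approach rather than a reproduction of it.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

nis = pd.read_csv("nis_2016_2019.csv")                    # hypothetical extract of NIS discharge records

# 1) Cohort: adult admissions with a TEER procedure code; urgent = non-elective admission
teer_codes = {"02UG3JZ"}                                  # placeholder code list, for illustration only
proc_cols = [c for c in nis.columns if c.startswith("I10_PR")]
has_teer = nis[proc_cols].isin(teer_codes).any(axis=1)
cohort = nis[has_teer & (nis["AGE"] >= 18)].dropna(subset=["ELECTIVE", "DIED", "AGE", "FEMALE"]).copy()
cohort["urgent"] = (cohort["ELECTIVE"] == 0).astype(int)

# 2) Propensity score for urgent admission, then inverse probability of treatment weights
X = sm.add_constant(cohort[["AGE", "FEMALE"]])            # in practice: demographics, comorbidities, hospital traits
ps = sm.Logit(cohort["urgent"], X).fit(disp=0).predict(X)
cohort["iptw"] = np.where(cohort["urgent"] == 1, 1 / ps, 1 / (1 - ps))

# 3) Weighted log-binomial model: the log link yields risk ratios rather than odds ratios
glm = sm.GLM(cohort["DIED"], sm.add_constant(cohort[["urgent"]]),
             family=sm.families.Binomial(link=sm.families.links.Log()),
             freq_weights=cohort["iptw"]).fit()
print("aRR, urgent vs non-urgent:", float(np.exp(glm.params["urgent"])))
```

Covariate balance after weighting would then be checked with standardized mean differences against the < 0.1 cut-off, mirroring the diagnostic described in the following paragraph.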
Inverse probability of treatment weighting (IPTW) was used to assess the differences between urgent and non‐urgent groups, balancing demographics, comorbidities, and hospital characteristics. The balance of baseline covariates was compared using the standardized mean difference (cut‐off < 0.1 for appropriate balance) (Supporting Information S1: Figure ). A log‐binomial model was used to estimate the adjusted risk ratios (aRR) with their 95% confidence intervals (CI). In addition, we performed a trend analysis on cases of TEER and urgent admissions, examining quarterly data each year using the Cochran‐Armitage Trend Test. The R 4.3.2 software was used for all analyses, considering a two‐tailed p < 0.05 as statistically significant. Results 3.1 Demographics and Characteristics This study involved 30 390 weighted admissions of adults who underwent TEER. Of these, 29 730 were included in the final analysis, with 6425 (21.6%) classified as urgent admissions (Figure ). The median age of the cohort was 79 years (IQR 71−85), with 45.8% being female and 77.7% identifying as white (Table ). The most prevalent comorbidities among this cohort were hypertension (81.8%), atrial fibrillation (59.4%), and dyslipidemia (58.6%). Patients with urgent admissions exhibited a higher comorbidity burden, particularly in extracardiac conditions such as renal failure (50.4% vs. 34.2%), diabetes (32.1% vs. 24%), and chronic pulmonary disease (31.1% vs. 22.3%), all of which were statistically significant differences ( p < 0.001). The median Elixhauser Comorbidity Index was also higher in the urgent admission group (7.00 vs. 5.00, p < 0.001). 3.2 Procedures and Outcomes The median length of stay for all admissions was 2 days (IQR 1−4), with a majority of procedures conducted in large urban teaching hospitals. Notably, a higher proportion of urgent admissions were transferred to another hospital (21.2% vs. 6.1%, p < 0.001) (Table ). The overall in‐hospital mortality rate for TEER admissions was 1.8%, with urgent admissions demonstrating a significantly higher in‐hospital mortality rate after IPTW adjustment (aRR 3.67, 95% CI 2.39–5.62) compared to non‐urgent admissions (Table , Table , and Figure ). Similarly, patients admitted urgently were at a higher risk of developing cardiogenic shock (aRR 4.95, 95% CI 3.73−6.57), acute stroke (aRR 2.56, 95% CI 1.32−4.97), and compared to non‐urgent admissions, in‐hospital cardiac arrest (aRR 2.25, 95% CI 1.08−4.69) and major bleeding (aRR 5.18, 95% CI 2.97−9.06) were also more frequent among urgent admissions (Table and Figure ). Furthermore, the utilization of invasive procedures was more common among urgent‐TEER patients, including IABP (aRR 3.97, 95% CI 2.53−6.23), PVAD (aRR 17.24, 95% CI 6.37−46.66), and mechanical ventilation (aRR 3.79, 95% CI 2.80−5.11) (Figure ). No significant difference was observed between the two groups with respect to renal replacement therapy and pericardial complications (Table and Figure ). Urgent admissions were associated with longer median length of stay (median 6 vs. 2 days, p < 0.001) and higher total costs (median $229 160 vs. $164 653, p < 0.01) compared to non‐urgent admissions (Table ). A statistically significant increase in the utilization of TEER was observed over time ( p < 0.001), while the proportion of urgent admissions remained unchanged across the study period ( p = 0.652) (Supporting Information S1: Figure ). 
Likewise, there was no temporal change in the length of hospital stay ( p = 0.425) and total charges ( p = 0.950) (Supporting Information S1: Table ). Discussion In this nationwide study, we found that urgent admissions represented nearly a quarter of all cases of patients undergoing TEER. Inpatients undergoing urgent TEER had a higher risk of in‐hospital mortality, an increased requirement of mechanical circulatory support, and other in‐hospital complications, along with a higher utilization of hospital resources. MR plays a significant role in the setting of acute decompensated heart failure, both in terms of its frequency and its impact on prognosis .
Given the relevance of MR and the risk profile of patients with acute HF, the approach of performing a less invasive therapeutic option such as TEER is of interest . Our study has documented an increased risk of adverse clinical events associated with the performance of TEER procedures in an emergency setting. Similar results were observed in patients undergoing cardiac surgery in terms of a worse outcome of urgent interventions. In this urgent group, a previous study found that those who received surgical mitral valve repair had a somewhat lower risk of mortality and complications than those who underwent mitral valve replacement surgery . The question remains as to whether the prognosis of these patients can truly be improved with TEER in this emergency setting compared to medical treatment alone . Previous studies have shown mixed results for urgent versus non‐urgent TEER in the short‐ and long‐term . Al‐khadra et al. used administrative data and found no significant differences in in‐hospital mortality (4.4% vs. 2.8%, p = 0.051) and cardiac complications between the two groups after propensity score matching . Similarly, a study conducted in Spain on 85 patients with degenerative and functional MR reported no differences in mortality, MR reduction, and improvement in the New York Heart Association class at 30 days between urgent and non‐urgent TEER . Furthermore, mortality was similar between both groups at a 2‐year follow‐up (17.6% vs. 25.1%, p = 0.864). In contrast, in a more recent NIS cohort (2016−2017), in‐hospital mortality was found to be significantly higher in urgent versus non‐urgent TEER (4.5% vs. 1.6%, p < 0.001) . Overall, these discrepancies can be explained by the use of different ICD‐10 codes to define the TEER, the type of analysis employed (crude vs. confounder‐adjusted), the study period considering the recent approval of the MitraClip device and the experience of the operators performing the procedure. It is reasonable to assume that patients undergoing urgent TEER present a higher‐risk profile compared to those with non‐urgent indications . This higher‐risk profile likely encompasses multiple factors beyond comorbidities alone. The observed increase in mortality and complication rates among patients undergoing urgent TEER may be attributed to the more severe clinical status at the time of intervention. We found that patients requiring urgent TEER often present with acute decompensation of heart failure and significant hemodynamic instability with cardiogenic shock in 12%, necessitating prompt intervention. These clinical factors, including the severity of MR, left ventricular dysfunction, and associated comorbidities may impact procedural outcomes . Factors such as hemodynamic instability, organ dysfunction, and the need for mechanical circulatory support can contribute to increased peri‐procedural risks and post‐procedural complications . Clinical Implications It is essential to optimize the clinical situation of patients before the TEER procedure, if possible, particularly in those with urgent admissions. A recent study observed that in patients with MR following acute myocardial infarction, cardiogenic shock was not a factor associated with a worse outcome. This work highlighted the importance of achieving hemodynamic stability as a primary goal before TEER, if this is feasible . Also, a further study demonstrated that patients with cardiogenic shock and MR who underwent TEER exhibited acceptable survival and procedural success . 
However, it is highly advisable to exercise caution when interpreting these results, as they are likely to be selected cases. Our study highlights the importance of robust risk stratification tools to identify patients at higher risk of adverse outcomes when undergoing urgent TEER. Factors such as comorbidities, hemodynamic stability, and severity of MR should be carefully evaluated to guide treatment decisions and improve patient's optimization before and after TEER . The optimal timing of interventions is a crucial consideration. The decision to perform TEER urgently versus electively should be informed by a comprehensive assessment of individual patient characteristics, including the severity of symptoms, hemodynamic status, and overall clinical stability. Balancing the potential benefits of early intervention with the inherent risks associated with urgent procedures is crucial. A multidisciplinary approach involving cardiologists, cardiac surgeons, and other specialists is often required for the management of patients with severe MR . It is crucial that these teams work together to conduct a thorough risk assessment, develop an effective treatment plan, and provide comprehensive care following the procedure . The majority of TEER procedures were conducted in large urban teaching hospitals, indicating a concentration of specialized care in these settings. Interestingly, while there was an overall increase in TEER utilization over time, the proportion of urgent admissions remained stable, suggesting consistent patient selection criteria for urgent interventions. It should be noted that our study has certain limitations, primarily due to the retrospective design and reliance on administrative data. There is a possibility that unmeasured confounding variables may have influenced our findings. Furthermore, important clinical parameters such as MR etiology, echocardiographic or hemodynamic data, specific TEER implant characteristics, and medication usage during hospitalization were not captured in our analysis. Additionally, the short‐term nature of our study precludes assessment of long‐term outcomes. Conclusions In conclusion, adult inpatients undergoing urgent TEER implantation had an increased risk of in‐hospital death and other short‐term complications. However, prospective multicenter studies evaluating long‐term outcomes are required to guide the care of patients with severe MR requiring urgent intervention. Carlos Diaz‐Arocutipa involved in concept/design. Carlos Diaz‐Arocutipa involved in data acquisition. Carlos Diaz‐Arocutipa, Cesar Joel Benites‐Moya, Javier Torres‐Valencia, Adhya Mehta, and Lourdes Vicent involved in data analysis/interpretation. Carlos Diaz‐Arocutipa and Lourdes Vicent drafted the article. Cesar Joel Benites‐Moya, Javier Torres‐Valencia, and Adhya Mehta critically revised the article. Carlos Diaz‐Arocutipa, CBM, Javier Torres‐Valencia, Adhya Mehta, and Lourdes Vicent approved the article. Not applicable because only information from published studies was used. The authors declare no conflicts of interest. Supporting information.
Exploring the impact of fulvic acid and humic acid on heavy metal availability to alfalfa in molybdenum contaminated soil
dea4d226-a6ec-4ef1-b846-dcdc6e655de9
11686242
Microbiology[mh]
The levels of heavy metal pollution in soils have been significantly elevated due to human activities, such as industrial processes, mining, and agricultural practices, posing substantial risks to both human health and the environment , . This contamination is particularly concerning because of its potential to enter the food chain, affecting both ecological systems and human populations , . Consequently, there is an urgent need for cost-effective and sustainable methods to remediate heavy metal-contaminated soils. Currently, various approaches have been explored for this purpose. For example, modified biochar has shown promising results in stabilizing heavy metals such as Cd, Pb, Cu, and Zn, effectively reducing their uptake by wheat seedlings and improving soil properties . Another common strategy involves the addition of EDTA to soil, which enhances the dissolution of Pb and facilitates its absorption and translocation in bamboo plants . Additionally, the use of mixed chelating agents has been successful in removing Cu and Pb from contaminated agricultural soil . With agents like DGPA, EDDS, and iron nanoparticles being frequently utilized – . However, many of these modifiers can lead to secondary pollution or are prohibitively expensive. Therefore, there is a pressing need to identify more affordable and environmentally benign alternatives for soil remediation. Humic substances (HS) represent a promising solution to this problem. HS are large, stable polymers found in natural soil and aquatic systems, formed through the physical, chemical, and microbial decomposition of plant and animal residues. These complex structures contain a wide array of active functional groups that play a crucial role in the transformation, mobility, and bioavailability of heavy metals in the soil . HS are typically classified into three main components based on their solubility: humin (HM), humic acid (HA), and fulvic acid (FA) . These components differ in molecular weight, functional groups content, and elemental composition. HA, for instance, generally has a molecular weight ranging from 50 to 100 kDa, while FA has a much smaller molecular weight, typically between 0.5 and 2 kDa . Numerous studies have demonstrated the beneficial effects of FA and HA in enhancing soil functions and mitigating heavy metal toxicity – . For example, research has shown that different concentrations of HA can reduce the mobilization, root uptake, and phytoaccumulation of heavy metals in cadmium-contaminated radishes . The addition of FA to soils contaminated with Pb and Cd has also been found to enhance the stability of these metals . Furthermore, the application of FA in wastewater irrigation of wheat has proven effective in reducing Cr toxicity, promoting plant growth, increasing biomass, and enhancing photosynthetic pigments such as chlorophyll, while also alleviating oxidative stress, lipid peroxidation, and Cr accumulation in stressed plants . Additionally, FA and HA significantly increases the accumulation of Cd in plants, with concentrations reaching 2.17 and 2.78 times those of the control treatment, respectively . These findings highlight the significant role of HS in influencing the behavior of heavy metals in various environmental systems. Molybdenum (Mo), while an essential trace element for plant growth , , can become an environmental hazard when its concentration in soil exceeds 5 × 10 − 6 in aqueous solutions, with toxicity levels falling between those of Zn(II) and Cr(III) compounds . 
Although Mo is necessary for normal plant growth and development, excessive amounts can result in chlorosis and yellowing of leaves , . In humans, excessive Mo intake can lead to health issues, including diarrhea and anemia , . The normal concentration of Mo in agricultural soil typically ranges from 0.8 to 3.3 mg/kg . However, studies have shown that in the Luoyang mining area, Mo concentrations in the soil ranged from 108.13 to 268.13 mg/kg, far exceeding the typical levels found in farmland soil . Alfalfa, a leguminous plant known for its rapid growth, substantial biomass, and high adaptability, has been extensively studied for its potential to remediate soils contaminated with heavy metals such as Cd, Zn, and Cu. Its ability to effectively mitigate soil pollution makes it an ideal candidate for this study. Therefore, in this research, we aim to investigate the potential of natural, pollution-free humic substances (FA and HA) in combination with alfalfa for remediating Mo-contaminated soils. Specifically, we seek to: (1) evaluate the impact of FA and HA on the bioavailability of Mo under pot culture conditions, (2) examine the bioavailability, phytoextraction, and distribution of heavy metals, and (3) analyze the responses of the soil bacterial community to the presence of FA and HA. By addressing these objectives, we aim to gain insights into the potential of combining alfalfa and HS for remediating mining-polluted soil, with a specific focus on Mo bioavailability, heavy metal uptake, distribution, and their effects on soil bacterial communities. Experimental designs For experimental purposes, soil samples were collected from agricultural land near a mining-impacted area in Luoyang city, Henan Province, China (coordinates: E111°29.294′, N33°48.829′), containing 17.00 mg/kg of Mo. The initial soil characteristics included a cation exchange capacity (CEC) of 26.79 cmol(+)/kg, available phosphorus (AP) at 146.79 mg/kg, rapidly-available potassium (AK) at 90.77 mg/kg, ammonium nitrogen (AN) at 12.62 mg/kg, and a pH of 7.42. To prepare for experimentation, the samples were air-dried, cleaned of debris, crushed, and sieved using 2 mm nylon sieves for the pot experiment and 100-mesh sieves for microwave digestion to ensure consistent particle size. The FA and HA used in this study were obtained from Shanghai Yuanye Bio-Technology Co., Ltd., Shanghai, China, and applied at three levels: 0.1%, 0.5%, and 1% (g/g). The control treatment (CK) contained no FA or HA. A total of 7 treatments with 3 replicates each were established: CK, FA0.1, FA0.5, FA1, HA0.1, HA0.5, and HA1. For each treatment, 3 kg of soil was mixed with the designated amount of FA or HA to ensure homogeneity. After mixing, all pots were incubated at room temperature, and soils were stabilized with FA and HA for three days prior to sowing alfalfa seeds. Uniform alfalfa seeds were then planted in each pot, allowing for plant growth and development under controlled experimental conditions. At the end of the 60-day pot experiment, the alfalfa plants were harvested, with shoots and roots separated and washed thoroughly with tap and deionized water to remove surface contaminants. The samples were then dried at 80°C for further analysis. Rhizosphere soil was collected from each pot and divided into two portions: one was air-dried at room temperature to analyze soil properties (pH, available phosphorus (AP), ammonium nitrogen (AN), rapidly-available potassium (AK), and cation exchange capacity (CEC)) and heavy metal content.
The other portion was stored at -80°C to preserve the soil bacterial community for microbiome analysis. This dual approach facilitated a comprehensive examination of both soil physicochemical properties and microbial diversity within the rhizosphere soil. Analytical methods CEC was measured following the hexamminecobalt trichloride solution-spectrophotometric method, as specified in the HJ 889–2017 standard of China. For the analysis of AN, AP, and AK in the acidic soil, the universal extract-colorimetry method specified in the NY/T 1849–2010 standard of China was utilized, ensuring precise quantification of these nutrient components in the soil samples. To assess heavy metal distribution within the soil, the modified European Community Bureau of Reference (BCR) method was employed, as described in previous studies . Metal forms were categorized into four fractions : exchangeable fraction (F1), reducible fraction (F2), oxidizable fraction (F3), and residual fraction (F4). To determine the total metal content in alfalfa shoots, roots, and the residual fraction, a digestion process was conducted using a mixture of HNO 3 , HCl, and HF. Soil samples (0.1 g each) were digested in a microwave oven (ETHOS UP, Milestone, Italy), and the resulting solution was diluted to a final volume of 100 ml and filtered through a 0.45 μm membrane. Additional procedural details are provided in the supplementary materials. The concentration of molybdenum (Mo) was measured using an atomic absorption spectrophotometer (TAS-990 SUPER AFG, China), allowing for accurate quantification of Mo content in the samples. The bioconcentration factor (BCF) is calculated with Eq. (1): $$\mathrm{BCF}=\frac{C_{\text{plant tissue}}}{C_{\text{soil}}} \quad (1)$$ where $$C_{\text{plant tissue}}$$ is the metal concentration in the shoot or root (mg/kg) and $$C_{\text{soil}}$$ is the metal concentration in the soil (mg/kg). The translocation factor (TF), used to evaluate the ability of heavy metals to transfer within plants , is calculated with Eq. (2): $$\mathrm{TF}=\frac{C_{\text{shoot}}}{C_{\text{root}}} \quad (2)$$ The primer set used for the PCR amplification consisted of 338 F (ACTCCTACGGGAGGCAGCAG) and 806R (GGACTACHVGGGTWTCTAAT). The PCR conditions involved an initial denaturation step at 95 °C for 3 min, followed by 27 cycles of denaturation at 95 °C for 30 s, annealing at 55 °C for 30 s, extension at 72 °C for 45 s, and a final extension step at 72 °C for 10 min. Subsequently, the microbial community analysis in the soil was carried out using Illumina MiSeq sequencing. This sequencing technique, performed by Shanghai Majorbio Bio-Pharm Technology Co., Ltd. in Shanghai, China, allowed for the generation of high-quality sequence data for further analysis and interpretation of the microbial composition in the soil samples. Detailed instructions are in the supplementary materials. Statistical analysis To ensure the reliability and accuracy of the study, rigorous measures were implemented for quality assurance and quality control. Duplicate samples, standard reference samples, and control treatments were utilized to validate the results. The recoveries of the chemical fractions of Mo and the total Mo were within the range of 90–105%, indicating the precision of the analytical methods employed. To account for variability, all tests were conducted in triplicate, with a standard deviation of less than 5%, ensuring consistency and reliability. The obtained results were then averaged to provide representative values. For clear and visually appealing graphical representations, all diagrams in this article were generated using Origin 2023b software.
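Because the bioconcentration and translocation factors in Eqs. (1) and (2) are plain concentration ratios, they can be computed directly from the measured tissue and soil values; a minimal sketch with made-up example concentrations (not values from this study) is shown below.

```python
def bcf(tissue_mg_kg: float, soil_mg_kg: float) -> float:
    """Bioconcentration factor: metal concentration in a plant tissue over that in soil (Eq. 1)."""
    return tissue_mg_kg / soil_mg_kg

def tf(shoot_mg_kg: float, root_mg_kg: float) -> float:
    """Translocation factor: shoot concentration over root concentration (Eq. 2)."""
    return shoot_mg_kg / root_mg_kg

# illustrative numbers only (mg/kg): soil total Mo, Mo in shoots, Mo in roots
soil, shoot, root = 17.0, 20.0, 25.0
print(f"BCF(shoot) = {bcf(shoot, soil):.2f}, BCF(root) = {bcf(root, soil):.2f}, TF = {tf(shoot, root):.2f}")
```

A TF below 1, as reported for all treatments in this study, indicates that Mo is retained mainly in the roots.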
Effects of FA and HA on soil Figure illustrates the key soil characteristics resulting from different treatments. Compared to the control, the FA treatment (excluding FA0.1) led to an increase in CEC, with a positive correlation observed between CEC and FA concentration. In contrast, the HA treatment resulted in a decrease in CEC. All treatments exhibited higher AP content compared to the control treatment, with FA treatment generally showing an increasing trend in AP levels. The HA treatment, however, initially increased and then decreased AP content. For AN content, all treatments (except FA0.1) showed higher levels than the control, with FA0.5 and FA1 treatments resulting in particularly elevated AN concentration. In terms of AK, both FA and HA treatments exhibited lower AK levels compared to the control, indicating a reduction in AK content under treatment conditions. The initial soil solution pH in the CK was 7.71. Compared to CK, the pH in FA treatments and HA0.1 was lower, while HA0.5 and HA1 increased the pH. Building on existing heavy metal extraction methods, BCR developed an improved three-step extraction procedure for analyzing heavy metal species . The BCR method classifies heavy metals into four fractions: the exchangeable fraction (F1), the reducible fraction (F2), the oxidizable fraction (F3), and the residual fraction (F4). F1 includes water-extractable, exchangeable, and carbonate-bound metals, while F2 represents metals bound to leachable Fe and Mn oxides and hydroxides. F3 encompasses metals associated with organic matter and sulphides, which can be separated. Finally, F4 corresponds to metals within the mineral lattice, which are not readily released into the environment.
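Since the BCR procedure partitions the extractable metal into the four operational fractions just described, the fraction percentages reported below are simply each fraction's share of the F1–F4 sum. A small sketch with hypothetical concentrations (mg/kg, not measured values) illustrates the bookkeeping.

```python
import pandas as pd

# hypothetical BCR results per treatment (mg/kg); columns are the four operational fractions
bcr = pd.DataFrame(
    {"F1": [1.0, 2.1], "F2": [1.6, 1.3], "F3": [10.5, 9.9], "F4": [1.9, 2.5]},
    index=["CK", "FA1"],
)
percent = bcr.div(bcr.sum(axis=1), axis=0) * 100   # each fraction as a share of the F1-F4 sum
print(percent.round(2))
```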
After harvest, soil samples from the different treatments underwent BCR fractionation to analyze metal speciation. Figure illustrates the distribution of Mo concentrations across the various fractions obtained from the BCR analysis. In the FA treatment, Mo was predominantly found in the F2 fraction (1.22–1.52 mg/kg), while the highest concentrations were observed in the F3 fraction (9.91–11.08 mg/kg). Similarly, in the HA treatment, Mo was mainly present in the F2 fraction (1.37–1.60 mg/kg), with the highest concentrations in the F3 fraction (10.38–10.90 mg/kg). The addition of FA and HA also led to an increased in Mo content in the F1 and F4 fractions. Compared to the CK, the proportions of Mo in the F1 and F4 fractions increased from 6.84% to 12.52% to 13.36 ~ 13.48%, 16.44 ~ 16.76% (FA) and 13.73 ~ 14.49%, 12.65 ~ 19.70% (HA). Conversely, the proportions of Mo in F2 and F3 fractions decreased from 10.51% to 70.13% in the CK treatment to 6.95 ~ 9.27%, 60.48 ~ 63.26% (FA) and 8.20 ~ 8.89%, 57.68 ~ 65.06% (HA). These changes suggests that the addition of FA and HA facilitated the conversion of Mo from the F2 and F3 fractions to the more stable F1 and F4 fractions. Notably, both FA and HA treatments exhibited a slight increase in Mo concentration in the F4 fraction, indicating a potential remobilization of Mo from the more labile fractions (F1, F2, and F3) to the residual fraction. This suggests that FA and HA may reduce the mobility and availability of Mo in the soil. The proportions of the residual Mo fractions were as follows: HA0.1 (19.70%) > FA0.1 (16.76%) > FA0.5(16.51%) > FA1 (16.44%) > HA1 (15.65%) > HA0.5 (12.65%) > CK (12.52%). The content and enrichment and transport energy of Mo in plants Throughout the experiment, robust growth was observed in Alfalfa plants across all treatments, indicating a high tolerance to Mo. The interaction between different plant species at the rhizosphere level can either promote or inhibit plant growth and metal absorption, depending on the specific crops involved . The total metal concentrations in Alfalfa shoots and roots are shown in Fig. (a) and Fig. (b). The addition of FA and HA influenced the transport of Mo, particularly regarding its distribution between roots and shoots. The application of FA inhibited Mo uptake by Alfalfa shoots. As the FA concentration increased from 0.1 to 1%, Mo content in the shoots decreased from 19.56 mg/kg in the control to 16.07, 16.60, and 15.30 mg/kg, respectively. In contrast, the application of HA led to an increase in Mo content in the shoots. In the control treatment, the shoot Mo concentration was 19.56 mg/kg, which increased to 23.24, 21.08, and 21.21 mg/kg as the HA application rate increased (Fig. (a)). Additionally, both FA and HA treatments resulted in higher Mo concentrations in the Alfalfa roots. In the control treatment, the Mo concentration in the root was 22.06 mg/kg. The HA0.5 treatment showed the highest increase (53.58 mg/kg, 2.43 times), followed by FA0.5 (34.87 mg/kg, 1.58 times), FA1 (30.26 mg/kg, 1.37 times), HA1 (26.51 mg/kg, 1.20 times), HA0.1 (26.21 mg/kg, 1.19 times), and FA0.1 (23.06 mg/kg, 1.05 times). Mo content in the roots initially increased and then decreased with higher FA and HA application rates, suggesting that moderate application of FA and HA promotes Mo absorption by Alfalfa roots. The BCF of plants for Mo under FA and HA treatments is shown in Fig. (c) and Fig. (d). The BCF is an indicator of a plant’s ability to absorb heavy metals. 
The BCF for Mo in the shoot of each treatment was as follows: HA0.1 (1.29) > HA0.5 (1.26) > HA1 (1.23) > CK (1.19) > FA0.5 (1.00) > FA0.1 (0.98) > FA1 (0.87). In the FA treatment, BCF values were generally less than 1, suggesting a limited capacity for Mo accumulation in the shoots. In contrast, HA treatments showed BCF value greater than 1, indicating a stronger enrichment capacity of Alfalfa shoots for Mo following HA application. For the BCF of Mo in the roots, the values were as follows: HA0.5 (3.20) > FA0.5 (2.10) > FA1 (1.73) > HA1 (1.54) > HA0.1 (1.46) > FA0.1 (1.41) > CK (1.34). The addition of FA and HA enhanced Mo accumulation in the root of Alfalfa. The BCF of the roots initially increased and then decreased with the increasing FA and HA concentrations, suggesting that appropriate doses of these substances positively influence the plant’s metal accumulation capacity. The TF reflects the distribution of heavy metal between the roots and shoots, with values greater than 1 indicating greater accumulation in the shoots and values less than 1 suggesting a higher accumulation in the roots , , . In this study, all TF values were less than 1, indicating that Mo was primarily concentrated in the roots. Furthermore, the addition of FA and HA significantly reduces the TF value, highlighting their role in limiting Mo translocation to the shoots. Effects of FA and HA on soil bacterial community In this study, a sequencing coverage rate exceeding 98% indicated sufficient sequencing depth. Figure (a-e) presents the assessment of alpha diversity indicators used to evaluate the richness and diversity of the bacterial community, including Shannon, Simpson, ACE, and Chao 1. The application of FA resulted in a decrease in both the richness and evenness of the bacterial community in the soil. Specifically, the ACE and Chao indices decreased from 2540.59 to 2474.75 in the CK to 2259.73, 1575.53, and 1196.00, respectively, as the FA application rate increased from 0.1 to 1%. Similarly, the Shannon index declined from 7.13 to 5.32 with increasing FA concentration. These findings suggest that FA has detrimental effects on soil microbial ecology. In contrast, the application of HA slightly increased the ACE and Chao index, with the exception of the FA0.5 treatment. The HA0.1 treatment showed the highest bacterial richness and diversity, as evidenced by increased ACE, Chao, and Shannon index values, alongside a reduced Simpson index compared to CK. Soil microorganisms are vital for carbon and nitrogen cycling, as well as for the decomposition of organic matter; thus, enhancing microbial communities is crucial for the restoration of contaminated soils . Previous studies have shown that heavy metals can induce shifts in microbial composition, and the characteristics of rhizosphere microorganisms are closely linked to the efficacy of plant-based remediation strategies . The impact of FA and HA on the composition of the rhizosphere bacterial community is illustrated in Fig. (a)-Fig. (c). The relative abundance of soil microbial communities was analyzed at the phylum level (Fig. a). where five dominant bacterial phyla were identified: Actinobacteriota (26.72%), Proteobacteria (23.54%), Firmicutes (18.49%), Acidobacteriota (11.35%), and Chloroflexi (8.22%) (Relative abundance > 5%). accounting for 88.31% of the total bacterial population. 
Several less abundant phyla, such as Gemmatimonadota (3.63%), Myxococcota (2.05%), Bacteroidota (1.91%), and Methylomirabilota (0.74%), were also identified. The Circos plot (Fig. b) illustrates the community composition at the phylum level for each treatment and the distribution of the top 10 dominant phyla across all treatments. Regardless of the FA and HA application rates, nine dominant bacterial phyla were consistently identified in the soil samples: Actinobacteriota (16.29–32.07%), Proteobacteria (18.22–27.31%), Firmicutes (5.39–52.51%), Acidobacteriota (1.93–20.59%), Chloroflexi (3.07–11.55%), Gemmatimonadota (2.24–4.50%), Myxococcota (1.55–2.55%), Bacteroidota (1.50–2.35%), and Methylomirabilota (0.32–1.17%). The relative abundances of Proteobacteria, Myxococcota, and Bacteroidota were lower in all FA and HA treatments than in the control. As the FA application rate increased from 0.1 to 1%, the relative abundance of Firmicutes increased substantially, from 8.1% in the control to 12.50%, 38.51%, and 52.51%, respectively, in the FA treatments. Figure illustrates the key soil characteristics resulting from the different treatments. Compared to the control, the FA treatment (excluding FA0.1) led to an increase in CEC, with a positive correlation observed between CEC and FA concentration. In contrast, the HA treatment resulted in a decrease in CEC. All treatments exhibited higher AP content compared to the control, with the FA treatments generally showing an increasing trend in AP levels. The HA treatment, however, initially increased and then decreased AP content. For AN content, all treatments (except FA0.1) showed higher levels than the control, with the FA0.5 and FA1 treatments resulting in particularly elevated AN concentrations. In terms of AK, both FA and HA treatments exhibited lower AK levels compared to the control, indicating a reduction in AK content under treatment conditions. The initial soil solution pH in the CK was 7.71. Compared to CK, the pH in the FA treatments and HA0.1 was lower, while HA0.5 and HA1 increased the pH. Building on existing heavy metal extraction methods, the European Community Bureau of Reference (BCR) developed an improved three-step sequential extraction procedure for analyzing heavy metal species . The BCR method classifies heavy metals into four fractions: the exchangeable fraction (F1), the reducible fraction (F2), the oxidizable fraction (F3), and the residual fraction (F4). F1 includes water-extractable, exchangeable, and carbonate-bound metals, while F2 represents metals bound to leachable Fe and Mn oxides and hydroxides. F3 encompasses metals associated with organic matter and sulphides, which can be released under oxidizing conditions. Finally, F4 corresponds to metals within the mineral lattice, which are not readily released into the environment.
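As a quick illustration of how the fraction proportions quoted above follow from the measured concentrations, the sketch below converts per-fraction Mo concentrations into percentage shares of the F1–F4 total. The input values are illustrative placeholders within the reported ranges, not the exact per-treatment measurements.

```python
# Hedged sketch: converting BCR fraction concentrations (mg/kg) into the
# percentage shares quoted in the text. Values are illustrative placeholders.

def fraction_shares(conc_mg_kg: dict) -> dict:
    """Return each fraction's share (%) of the summed F1-F4 concentrations."""
    total = sum(conc_mg_kg.values())
    return {frac: 100.0 * c / total for frac, c in conc_mg_kg.items()}

example_fa = {"F1": 2.2, "F2": 1.4, "F3": 10.2, "F4": 2.7}  # mg/kg, illustrative only
for frac, share in fraction_shares(example_fa).items():
    print(f"{frac}: {share:.1f}%")
```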
Figure presents the Pearson correlation analysis, revealing significant correlations of CEC, AP, and AN with various microbial indicators. This suggests that these soil parameters are strongly associated with microbial activity. HS are complex compounds known to form complexes with metal ions, thus influencing the mobility and bioavailability of metals . HS contain diverse functional groups, including hydroxyl, aldehyde, ester, and carboxyl groups, which can participate in adsorption and complexation reactions with heavy metals, thereby altering their forms and bioavailability in soil . For instance, combined applications of passivating agents (such as phosphate, humic acid, and fly ash) have been shown to convert heavy metals such as Pb and Cd from more mobile and toxic forms into more stable ones . Both FA and HA can promote stabilization of exogenous metals in soil, with the transformation effect of FA increasing with higher application rates. However, FA appears to have a relatively weaker stabilization effect and, at high doses, may even reduce metal stabilization. Plants actively regulate the concentration of elements within their tissues under heavy metal stress . In the case of Alfalfa, root tissues showed a higher Mo concentration than shoots, indicating preferential Mo accumulation in roots. Previous studies have also observed this pattern, with increased Mo concentrations in plants following HA application. For example, when the HA application rate was increased, Mo levels in shoots and roots rose from 1.74 mg/kg and 0.04 mg/kg to 2.91 mg/kg and 2.40 mg/kg, respectively . Additionally, it has been reported that a 2% HA addition raised the shoot Cd concentration from 30.9 mg/kg to 39.9 mg/kg, likely due to a pH reduction that facilitated Cd migration. Plants may also absorb complexes formed between heavy metals like Cd and humic acid fragments, which are derived from microbial decomposition or self-decomposition . Humic acid's ability to form metal complexes makes it effective for bioremediation of heavy metals, while FA can inhibit metal uptake, suggesting its potential for reducing metal accumulation in acidic, contaminated soils . While some studies suggest that humic substances do not significantly alter the chemical form of Mo , contrasting findings indicate that specific inorganic metal complexes can affect Mo mobility in the rhizosphere, thereby influencing root uptake and potentially altering heavy metal accumulation in plant tissues , . These findings align with the current study's results. Microbial diversity is a key indicator of ecosystem functionality. Soil microorganisms regulate numerous soil functions, including soil quality maintenance and plant resilience . In this study, the FA and HA applications impacted alpha diversity indices, consistent with previous reports on the negative effects of Cd on bacterial diversity . Biochar addition has also been shown to enhance microbial richness and diversity, thereby improving soil health across various soil types . Soil microbial communities are strongly influenced by soil physicochemical properties.
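The phylum–property correlations detailed next can be obtained with a standard Pearson test. The sketch below shows the general approach, assuming per-sample vectors of one soil property and one phylum's relative abundance; the numbers are placeholders, not the study's measurements.

```python
# Hedged sketch: Pearson correlation between a soil property (e.g., CEC) and a
# phylum's relative abundance across samples. Placeholder data for illustration.
from scipy.stats import pearsonr

cec = [18.2, 19.5, 21.3, 22.8, 24.1, 25.6, 26.9]         # cmol/kg, placeholder values
firmicutes = [8.1, 10.4, 12.5, 20.7, 38.5, 45.2, 52.5]    # % relative abundance, placeholder

r, p = pearsonr(cec, firmicutes)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
# A positive r with p < 0.01 corresponds to the positive CEC-Firmicutes
# correlation described in the text.
```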
This study found a negative correlation between CEC and Actinobacteriota ( p < 0.05), Chloroflexi ( p < 0.05), and Methylomirabilota ( p < 0.05), while a positive correlation was observed with Firmicutes ( p < 0.01) (Fig. ). AP showed a negative correlation with Gemmatimonadota ( p < 0.01), Myxococcota ( p < 0.05), and Bacteroidota ( p < 0.05). AN showed a negative association with unclassified Bacteria ( p < 0.01), whereas a positive correlation was observed with Firmicutes ( p < 0.05). Root Mo content showed a negative correlation with Bacteroidota ( p < 0.05). Proteobacteria displayed a negative correlation with F4 ( p < 0.01) and a positive correlation with F3 ( p < 0.05), potentially due to their sensitivity to heavy metals. These findings suggest that CEC, AP, AN, and specific microbial indicators play a crucial role in shaping the soil bacterial communities. This study used pot incubation experiments to evaluate the immobilization efficiency of humic substances (FA and HA) in reducing the mobility and bioavailability of molybdenum (Mo) in agricultural soils. In Alfalfa cropping systems, FA and HA treatments effectively diminished soil Mo mobility and availability during the incubation period, influencing its transport and distribution within plant roots and shoots. Notably, FA showed a stronger impact on root Mo accumulation, whereas HA exhibited a more pronounced effect in shoots. Additionally, FA and HA applications altered soil bacterial abundance and diversity, leading to shifts in the microbial community. Specifically, FA application increased the abundance of Firmicutes, while variations in Actinobacteriota, Firmicutes, Acidobacteriota, Chloroflexi, Gemmatimonadota, and Myxococcota were correlated with changes in soil CEC, AP, and AN. Of the two humic substances, HA demonstrated a greater potential for remediating metal-contaminated soil. Overall, humic substances (FA and HA) offer an eco-friendly approach to enhancing the remediation of Mo-contaminated agricultural soils. Further studies are recommended to investigate the long-term impacts of these treatments on soil microorganisms and plant health.
Proteomic Analysis of the Murine Liver Response to Oral Exposure to Aflatoxin B1 and Ochratoxin A: The Protective Role to Bioactive Compounds
Aflatoxin B1 (AFB1) and Ochratoxin A (OTA) are two major mycotoxins which contaminate a wide range of food commodities, especially cereals and derived products, representing a serious concern for human and animal health . These toxic compounds are secondary metabolites produced by filamentous fungi that belong primarily to Aspergillus and Penicillium species and grow on crops under conditions of improper storage and humidity. Of the two, AFB1 is the most potent hepatotoxic and carcinogenic member of the aflatoxin family, whereas OTA is a nephrotoxic and immunosuppressive compound. Due to their severe toxicity, AFB1 and OTA are classified by the International Agency for Research on Cancer (IARC) as a Group 1 carcinogen (carcinogenic to humans) and a Group 2B carcinogen (possibly carcinogenic to humans), respectively . Over the years, it has been demonstrated that these toxins are widespread in various cereals such as wheat, maize, barley, and rice and can occur during the pre-harvest, post-harvest, or storage stages. In fact, diverse climatic factors, including high humidity and temperature, markedly increase mold growth, leading to contamination of these commodities. Therefore, the consumption of contaminated cereals and their derivatives, such as flour, breakfast cereals, and processed food products, represents a significant health hazard, since these toxins are stable even under cooking and processing conditions . With regard to their toxicological effects on humans, it has been demonstrated that long-term exposure to AFB1 and OTA can lead to several health disorders, among which is the onset of liver damage and cancer . Accordingly, the liver plays a central role in AFB1 and OTA metabolism. In fact, it has been demonstrated that AFB1 is primarily bioactivated by hepatic microsomal phase I cytochrome P450 enzymes, which convert it into its electrophilic reactive epoxide form (AFBO). This metabolite forms adducts with DNA and proteins, causing mutations and promoting liver carcinogenesis. However, AFBO can also be metabolized by phase II detoxifying enzymes, leading to its degradation and elimination . Similarly, OTA is biotransformed in the liver by phase I and II enzymes, although the liver is not the sole organ that metabolizes this toxin . Given the broad presence of AFB1 and OTA in food commodities and especially the difficulty of their elimination, research has focused on the possibility of employing substances capable of modifying their metabolism and reducing their bioaccumulation. Lactic acid bacteria (LAB), for instance, are able to increase the quality of food matrices by driving rapid fermentation and synthesizing a wide range of beneficial molecules . Among them, organic acids reduce the pH of the substrates, preventing growth of undesirable microorganisms such as mycotoxigenic fungi . In addition to probiotics, plant-based foods like pumpkin (P) ( Cucurbita spp.) are rich in antioxidants, making them effective in combating oxidative stress. In fact, pumpkin contains high levels of bioactive compounds such as carotenoids, vitamin C, and phenolic compounds, which contribute to its strong antioxidant capacity . Moreover, these compounds help reduce chronic inflammation, a factor in diseases like cancer and cardiovascular conditions, making pumpkin a valuable dietary component for mitigating the harmful effects of environmental toxins.
In this study, fermented whey (FW) and P were used as functional ingredients, either individually or in combination, to replicate a realistic scenario in the Mediterranean diet. Moreover, the intake of a single functional compound is implausible, as natural foods always contain numerous bioactive compounds . From the perspective of the food industry, the production of 1 kg of cheese generates about 9 L of whey, almost half of which is disposed of as waste. This disposal, often untreated, poses significant environmental problems . Considering that whey offers a promising solution to counter the harmful effects of mycotoxins, harnessing the bioactive components of fermented whey not only solves whey disposal problems but also provides a sustainable way to mitigate the associated risks, turning an environmental liability into a valuable resource . Moreover, several studies have focused on its hepatoprotective effects against acute or chronic toxicity induced by xenobiotics . It is also important to emphasize that proteomics has proven to be a valuable tool for deepening the understanding of the mechanisms of action that cause hepatotoxicity, since it enables the identification and quantification of specific proteins associated with toxic responses and with the protective pathways elicited by FW and P interventions. Furthermore, it is a key element for identifying important biomarkers related to various liver diseases and even cancer . In light of this, the aim of the present study was to investigate the advantageous role of goat milk FW and P as functional ingredients in protecting against the sub-chronic hepatotoxic effects of AFB1 and OTA in male and female rats, using a proteomics approach. 2.1. Identification and Quantification of Proteins Gel-free shotgun proteomics analysis of rat liver was initiated by identifying peptide features through the Spectrum Mill MS Proteomics Workbench Package Rev BI.07.09 (Agilent Technologies, Santa Clara, CA, USA). Thereafter, the proteins with different abundances between groups were statistically filtered with Mass Profiler Professional version 15.0 software (Agilent Technologies, Santa Clara, CA, USA) through an unpaired t -test ( p < 0.05), analyzing males and females of each experimental group separately. More specifically, each group exposed to single or combined mycotoxins was compared with its counterpart supplemented with functional ingredients, once with FW and once with FW + P, in order to identify the DEPs involved. In male rats exposed to mycotoxins ( A), 95 proteins were differentially expressed in the AFB1 group compared to the male control group, 67 with OTA versus the control group, and 81 with the combination (AFB1 + OTA vs. control). In females ( B), more DEPs were observed for each comparison: 134 were identified with AFB1, 101 with OTA, and 140 with the combination. In male rats exposed to FW ( A), 116 proteins were differentially expressed in the FW + AFB1 group with respect to the group with only AFB1, 71 in the FW + OTA group versus the OTA group, and 122 with the combination (FW + AFB1 + OTA vs. AFB1 + OTA). In females ( B), a similar scenario was observed: 104 were identified with AFB1, 77 with OTA, and 115 with the combination. In the presence of FW and P , the number of DEPs was higher than with FW alone when mycotoxins were administered individually: 127 proteins for males ( A) and 137 for females ( B) with AFB1, and 158 and 190 for males and females exposed to OTA, respectively.
However, in the presence of both mycotoxins, the number decreased to 145 for males and 162 for females. 2.2. Gene Ontology of Differentially Expressed Proteins Functional annotation of the differentially expressed proteins (DEPs) was performed using the DAVID database in order to identify the most significant biological processes (BPs) and molecular functions (MFs) associated with the DEPs found in each comparison. Dietary exposure to AFB1 affected hepatic metabolism. Compared to the control, it altered the expression of urea cycle, glycolysis/gluconeogenesis, and amino acid biosynthesis proteins, in line with the results of Sun et al. (2019), who reported an upregulation of proteins involved in cancer-related pathways of metabolism, amino acid biosynthesis, and chemical carcinogenesis . Furthermore, it caused oxidative stress. These effects were mostly observed in females, in which, besides Hsp70 upregulation, downregulated expression of Gpx1 and Sod1 was identified. An altered response to oxidative stress following exposure to OTA-contaminated feed was evident in both sexes. The effects were similar, but the proteins involved differed, except for Prdx1, which was downregulated in both sexes. Its downregulation has been associated with the activation of the PI3K/AKT pathway and therefore with the promotion of cancer . In addition, ATP synthase F1 subunit beta (Atp5f1b), an important protein for hepatic mitochondrial function, was significantly downregulated in both males and females. Several studies have demonstrated that the reduction in its expression exacerbates mitochondrial dysfunction and oxidative stress . When rats were fed AFB1 + OTA, changes occurred in the expression of proteins involved in metabolism, such as the urea cycle, glycolysis/gluconeogenesis, and amino acid biosynthesis, as in the AFB1 case. In addition, oxidative effects were observed. Specifically, in females, several antioxidant enzymes (Gsta1, Gstm1, Sod1, and Cat) were downregulated. The downregulation of these enzymes has been associated with the onset of diverse cancers . Only after exposure to both mycotoxins did a reduction in the expression of structural chromatin constituents occur. Reduced expression of these components could have a negative effect on the maintenance of genome integrity. After exposure to mycotoxins and the individual functional ingredient (FW), the response to xenobiotic stimulus emerged as the most significant BP in both males ( n = 18 to 20) and females ( n = 9 to 17) ( A,B). Likewise, similar findings were observed when rats were exposed to both functional ingredients , with the response to xenobiotic stimulus again the most common biological process in male (n = 18–20) and female rats (n = 9), though more pronounced in males. Accordingly, it is well known that the liver plays a crucial role in metabolizing and detoxifying xenobiotics via phase I and phase II enzymes, and many of the proteins involved are key players in the detoxification pathways activated by AFB1 and OTA exposure. In fact, the upregulation of oxidative proteins suggests higher oxidative stress in the liver, which could indeed be part of a positive feedback mechanism that liver cells use to maintain homeostasis. While these parameters may not directly reflect hepatotoxicity, they offer valuable insight into the liver's adaptive response to stress .
Among the proteins affected, the most significantly altered were Hmgcs2, the mitochondrial enzyme involved in ketogenesis; the glutathione S-transferases (Gsta1, Gstm1, Mgst1), which conjugate toxic metabolites with glutathione to facilitate their excretion; and the oxidative stress biomarkers superoxide dismutase 1 (Sod1) and catalase (Cat). Moreover, heat shock proteins such as Hspa8 and Hspd1, which protect cells from stress-induced damage, were upregulated with the combination of toxins (log fold change (LogFC) > 2) but not with the individual exposures (LogFC < −2). Additionally, enzymes implicated in energy metabolism and cellular repair (Aldh9a1, Adcy1) were strongly downregulated in the combined exposure (LogFC < −1.80) but not in the single ones, suggesting a synergistic effect of the toxins. Accordingly, previous studies confirmed the modulation of hepatic xenobiotic-metabolizing enzymes in the presence of AFB1 and, at the same time, demonstrated the capacity of coffee extracts to activate detoxifying enzymes for its degradation . Likewise, the degradation of AFB1 was recently demonstrated by employing diverse bacterial species, as well as different waste products containing high amounts of phenolic compounds . Moreover, OTA-induced modulation of the xenobiotic biotransformation system has been implicated in hepatic metabolic processes in vitro and in vivo . However, as in this case, it has been demonstrated that plant extracts and their bioactive compounds may act by inducing xenobiotic detoxification and biotransformation pathways . The next most prominent BP was liver development, observed with both functional ingredients and in both sexes, with 10 to 13 associated proteins for males ( A and A) and 8 to 10 for females ( B and B). This biological mechanism is essential for growth, differentiation, and maturation of the liver and is tightly regulated by various signaling pathways and proteins that control cellular functions such as proliferation, differentiation, and metabolic adaptation. In the context of mycotoxin exposure, proteins such as Atp5f1b, UDP glucuronosyltransferase family 1 member A6 (Ugt1a6), adenylate kinase 2 (Ak2), aldehyde dehydrogenase 9 family member A1 (Aldh9a1), and ornithine transcarbamylase (Otc) were downregulated when the rats were exposed to the mycotoxins individually. Accordingly, diverse studies have reported the beneficial effect of dietary bioactive components in enhancing cellular antioxidant defense systems at the hepatic level . Therefore, these results indicate that the combined action of these bioactive ingredients may actively participate in favorable mitigation processes. However, when both AFB1 and OTA were administered together, the expression of these proteins was notably increased, suggesting an adaptive response by the liver to counteract the toxic effects and promote recovery. Additionally, BPs related to glutathione metabolism, apoptotic processes, gluconeogenesis, response to nutrients, and circadian rhythm were also affected, but to a lesser extent. In terms of MFs ( C,D and C,D), identical protein binding was the most enriched function in both sexes (n = 30 to 48), followed by ATP binding (n = 18 to 34), enzyme binding (n = 11 to 20), and ATP hydrolysis activity (n = 10 to 20).
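Before turning to the heatmap discussed next, the following minimal sketch shows the kind of filtering that yields such DEP lists: an unpaired t-test (p < 0.05) between two groups combined with a |log2 fold change| cutoff. This is a generic illustration with simulated data, not a reproduction of the Mass Profiler Professional workflow used in the study.

```python
# Hedged sketch: selecting differentially expressed proteins (DEPs) with an
# unpaired t-test (p < 0.05) and a |log2 fold change| threshold. Simulated data;
# the study's actual filtering was performed in Mass Profiler Professional.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
proteins = [f"protein_{i}" for i in range(5)]
control = rng.normal(loc=100, scale=10, size=(5, 5))                 # 5 proteins x 5 control rats
treated = control * np.array([[2.5], [1.0], [0.3], [1.1], [4.0]])    # simulated abundance shifts

deps = []
for i, name in enumerate(proteins):
    _, p_val = ttest_ind(treated[i], control[i])
    log_fc = np.log2(treated[i].mean() / control[i].mean())
    if p_val < 0.05 and abs(log_fc) > 1:
        deps.append((name, round(float(log_fc), 2), round(float(p_val), 4)))

print(deps)  # proteins passing both the significance and fold-change filters
```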
To deepen these results, a heatmap was generated from the proteomic data to visually represent the general changes in protein expression following exposure to AFB1, OTA, and their combination (AFB1 + OTA) in the presence of FW or FW + P in both male and female rats compared to control feed . The heatmap displays downregulated proteins (green) and upregulated proteins (red), providing a clear overview of the proteomic response to mycotoxin exposure across the different conditions. The detailed list of DEPs altered in BPs is included in for males and for females. Consistent outcomes were observed in both sexes, revealing matching trends in protein expression. When rats were exposed to each mycotoxin separately with FW or FW + P supplementation, most proteins displayed moderate downregulation (LogFC < −2), particularly in the AFB1 group, hinting at the positive action of the bioactive compounds against the toxin. However, occasionally with OTA, a few proteins were upregulated, especially in males and with both ingredients. In contrast, when rats were exposed to the mycotoxin mixture, the expression profile showed significant upregulation (LogFC > 2), particularly in the combined group (AFB1 + OTA + FW + P). This points to a potential synergistic effect of the two toxins, whereby simultaneous exposure may exacerbate the biological response compared with a single exposure. In line with that, numerous investigations have previously reported an additive effect of AFB1 and OTA in vivo and in vitro, emphasizing the potential risk of their co-occurrence . A recent metabolomic study, for instance, reported a synergistic effect of AFM1 and OTA in mouse livers, displaying the alteration of metabolites related to oxidative stress . 2.3. Metabolic Pathways Analysis Understanding the mechanism of action of pathways involved in the primary functioning of the liver has helped to clarify the metabolic alterations that occur in the presence of AFB1 and OTA and, notably, to verify the beneficial role of the functional ingredients. For this purpose, KEGG-based visualization of the DEPs in this study allowed identification of the main processes altered in rats exposed to mycotoxin-contaminated feed combined with FW or FW + P, showing that the most significantly affected signaling pathways were predominantly linked to metabolic responses . In fact, these routes showed the highest number of modified features, higher in males exposed to FW and AFB1 alone ( n = 54) or in combination with OTA ( n = 52), whereas in females they were lower with single mycotoxins ( n = 34) than with the combination ( n = 52). In the presence of pumpkin, the situation was reversed between the sexes. However, it is well known that the liver is the largest metabolic organ, playing a specific role in digestion, metabolism, absorption, and transport of nutrients, biodegradation of toxic compounds, and processing of various hormones and cytokines secreted by the viscera . Moreover, it is essential for the biosynthesis of amino acids (AAs) that serve as the building blocks for several key proteins, as well as being central to glucose metabolism and fatty acid metabolism . In the present investigation, carbon metabolism emerged among the most commonly altered pathways , along with biosynthesis of AAs, suggesting consequent disruptions in energy production and other cellular processes.
Indeed, one-carbon metabolic pathways involve the conversion of serine to glycine, the glycine cleavage system (GCS), and the metabolism of choline and other amino acids. For that reason, recent studies have indicated that cancer cells may modify or become increasingly dependent on these pathways in order to maintain the supply of carbon units necessary for their proliferation . Additionally, the chemical carcinogenesis-reactive oxygen species (ROS) and hepatocellular carcinoma pathways were also impacted ( n > 10), which is particularly relevant given the liver's central role in metabolizing AFB1 and OTA. Focusing once more on the overall expression of DEPs, the heatmap revealed a distinct pattern in protein expression across the multiple groups. The detailed list of DEPs altered in MPs is included in for males and for females. In this case, when rats were exposed to AFB1 + FW and AFB1 + FW + P, an extended downregulation was observed compared with AFB1 alone (LogFC < −1.6). Very small differences between the sexes were found in the downregulation trend observed with AFB1 exposure plus the functional ingredients compared with the mycotoxin only. With OTA, the downregulation trend was clearer when FW + P was added, especially in females . Nonetheless, in the latter case, upregulation of certain proteins was also observed, with higher values in females (LogFC > 2.0) than in males (LogFC > 1.7). Conversely, when both AFB1 and OTA were administered together, a clear upregulation of proteins was observed, suggesting once again a synergistic effect between the two toxins. Nevertheless, the increase in protein expression in response to combined toxin exposure was slightly more pronounced in females (LogFC up to 4.7) ( A) than in males (LogFC up to 4.5) ( B), further supporting the hypothesis of a stronger synergistic effect in females. Thus, the contribution of FW or FW + P in the diet modulated the toxic effects of AFB1 and OTA when they were administered singly, highlighting the potential for combined exposures to exert stronger effects than individual toxins. Remarkably, several proteins identified through the proteomic analysis were linked to the hepatocellular carcinoma (HCC) pathway, a key area of concern following exposure to mycotoxins. In fact, HCC is a primary form of liver cancer which often results from chronic exposure to various toxic agents and is the sixth most common malignancy worldwide . In this study, several proteins involved in liver function and cancer development were significantly affected by AFB1 and OTA exposure . Among them, important members of the actin family, such as beta actin (Actb), beta-actin-like 2 (Actbl2), actin gamma 1-like 1 (Actg1l1), and actin gamma 1 (Actg1), which are often dysregulated in cancer, were downregulated under exposure to the individual mycotoxins together with the bioactive ingredients, hinting at the protective role of the latter. In fact, these actin monomers are fundamental for cytoskeletal polymerization and integrity and are directly implicated in the assembly and turnover of diverse cellular processes; they were upregulated in male and female rats after exposure to the mycotoxins individually. Among them, Actb, Actg1, and Actin 5 (Act5) were strongly upregulated after single administration (LogFC > 14). Additionally, antioxidant response proteins belonging to the glutathione S-transferase (GST) and NAD(P)H quinone oxidoreductase (NQO) families were significantly downregulated in male and female rats, with LogFC < −2.6 for FW + AFB1 and LogFC < −1.8 with both functional ingredients.
In contrast, in the combined exposure (AFB1 + OTA), their expression was increased (LogFC > 2), as was the case with the single administration of AFB1 (LogFC > 10), highlighting a shift in cellular signaling that could favor tumorigenesis. In fact, GSTs are key phase II enzymes which protect cells from oxidative stress in cancer , and herein, six GST-related proteins were broadly altered: GSTa1, GSTa2, GSTa3, GSTm1, GSTm2, and MGST1. Likewise, the upregulation of NQO1, as in this case, has been associated with human liver injury . Overall, the expression patterns of the abovementioned proteins, particularly in combination with the functional ingredients, could serve as potential biomarkers of liver carcinogenesis and help identify important targets for therapeutic intervention. The present study highlights significant sex-specific differences in the hepatic response to mycotoxins (AFB1 and OTA) and their mitigation by bioactive compounds such as FW and P. These differences were evident in the number of differentially expressed proteins (DEPs), the biological processes (BPs) affected, and the pathways modulated under various experimental conditions. Female rats consistently exhibited a higher number of DEPs compared to males across all experimental groups. This disparity suggests a greater sensitivity of females to mycotoxin-induced hepatic changes. For instance, in response to AFB1 exposure, females exhibited 134 DEPs compared to 95 in males, while the combined AFB1 + OTA exposure amplified this effect further (140 DEPs in females vs. 81 in males). These findings align with previous reports suggesting that sex hormones may influence xenobiotic metabolism and the oxidative stress response, potentially rendering females more vulnerable to hepatotoxic effects . The observed downregulation of key antioxidant enzymes such as Gpx1, Sod1, and Cat in females further corroborates this hypothesis, as it indicates a diminished capacity to counteract oxidative stress. In contrast, males displayed a more robust response to xenobiotic stimuli, suggesting a higher activation of detoxification pathways mediated by phase I and phase II enzymes. Supplementation with FW or FW + P exhibited protective effects in both sexes, though the mechanisms and extent of mitigation differed. The response to xenobiotic stimuli emerged as a predominant biological process in males (18–20 proteins involved) compared to females (9–17 proteins), reflecting a sex-dependent variation in detoxification capacity. Conversely, females demonstrated a greater modulation of oxidative-stress-related pathways and metabolic processes, including amino acid biosynthesis and the urea cycle. Interestingly, the combination of FW and P enhanced the mitigation effects, with a higher number of DEPs observed in both sexes compared to FW alone. For instance, FW + P supplementation in AFB1-exposed females resulted in 137 DEPs, compared to 127 in males. These findings suggest a synergistic effect of FW and P in modulating hepatic responses to mycotoxins, particularly in pathways related to cellular repair and antioxidant defense. Combined exposure to AFB1 and OTA exacerbated the hepatotoxic effects, particularly in females, as evidenced by a more pronounced upregulation of proteins (LogFC up to 4.7 in females vs. 4.5 in males). This suggests a synergistic interaction between the two mycotoxins that overwhelms the hepatic defense mechanisms, especially in females.
The adaptive response observed in females, characterized by an increase in structural chromatin proteins and metabolic enzymes, may represent an effort to counteract the heightened toxic burden. However, this response appears to be less effective compared to the more stable protein expression profiles observed in males.
The observed downregulation of key antioxidant enzymes such as Gpx1, Sod1, and Cat in females further corroborates this hypothesis, as it indicates a diminished capacity to counteract oxidative stress. In contrast, males displayed a more robust response to xenobiotic stimuli, suggesting a higher activation of detoxification pathways mediated by phase I and phase II enzymes. Supplementation with FW or FW + P exhibited protective effects in both sexes, though the mechanisms and extent of mitigation differed. The response to xenobiotic stimuli emerged as a predominant biological process in males (18–20 proteins involved) compared to females (9–17 proteins), reflecting a sex-dependent variation in detoxification capacity. Conversely, females demonstrated a greater modulation of oxidative-stress-related pathways and metabolic processes, including amino acid biosynthesis and the urea cycle. Interestingly, the combination of FW and P enhanced the mitigation effects, with a higher number of DEPs observed in both sexes compared to FW alone. For instance, FW + P supplementation in AFB1-exposed females resulted in 137 DEPs, compared to 127 in males. These findings suggest a synergistic effect of FW and P in modulating hepatic responses to mycotoxins, particularly in pathways related to cellular repair and antioxidant defense. Combined exposure to AFB1 and OTA exacerbated the hepatotoxic effects, particularly in females, as evidenced by a more pronounced upregulation of proteins (LogFC up to 4.7 in females vs. 4.5 in males). This suggests a synergistic interaction between the two mycotoxins that overwhelms the hepatic defense mechanisms, especially in females. The adaptive response observed in females, characterized by an increase in structural chromatin proteins and metabolic enzymes, may represent an effort to counteract the heightened toxic burden. However, this response appears to be less effective than the more stable protein expression profiles observed in males. Proteomic analysis of Wistar rats exposed to AFB1 and OTA with the addition of FW and P highlighted the ability of these ingredients to counteract the negative effects of the mycotoxins on hepatic responses, particularly in detoxification and developmental processes. Moreover, the metabolic alterations induced by these toxins were reflected in significant changes in carbon metabolism and AA biosynthesis, both central to liver function. Interestingly, important biomarkers implicated in HCC were positively modulated by the functional ingredients in both males and females, but only when the mycotoxins were administered individually. Based on these findings, the presence of FW or FW + P as functional ingredients in food may play a significant role in modulating the toxic responses to mycotoxins, though further analysis is needed to fully elucidate the protective mechanisms.
4.1. Reagents
For feed preparation, wheat flour, water, salt (NaCl), and sugar (sucrose) were acquired from a commercial market in Valencia, Spain. Aspergillus flavus ITEM 8111 was purchased from the Agro-Food Microbial Culture Collection of the Institute of Sciences of Food Production (ISPA, Bari, Italy), whereas Aspergillus steynii 20,510 was obtained from the Spanish Type Culture Collection (CECT), Science Park of the University of Valencia (Paterna, Valencia, Spain). Goat milk whey coagulated by commercial rennet (starter culture R-604) was purchased from the ALCLIPOR society, S.A.L. (Benassal, Spain), while the pumpkin used in this study was purchased from a supermarket (Valencia, Spain).
It was peeled, deseeded, cut, and freeze-dried, and then ground to obtain a homogeneous powder. For protein precipitation, extraction, and digestion, ethanol was supplied by Sigma-Aldrich (St. Louis, MO, USA), and dithiothreitol (DTT) with a purity of 99%, Trizma ® hydrochloride, Tris-HCl with a purity of 99%, and trypsin were purchased from Sigma-Aldrich (St. Louis, MO, USA). Thiourea, purchased from Thermo Fisher Scientific (Kandel, Germany), and urea, obtained from FEROSA (Barcelona, Spain), were used to prepare the lysis buffer used in protein digestion. Furthermore, iodoacetamide (IAA) with a purity of 98% was obtained from ACROS Organics™, Thermo Fisher Scientific (Princeton, NJ, USA). Finally, for the proteomics analysis, methanol was supplied by Sigma-Aldrich, acetonitrile (AcN) LC/MS-grade OPTIMA ® (≥99.9% purity) was supplied by Fisher Chemical (Geel, Belgium), and formic acid (≥98%) was obtained from Sigma-Aldrich. Deionized water (>18 MΩ·cm resistivity) was obtained using a Milli-Q water purification system (Millipore, Bedford, MA, USA).
4.2. In Vivo Experimental Design
Male and female Wistar rats (260–340 g) were obtained from the pharmacy animal facility at the University of Valencia, Spain. At the beginning of the study, rats were housed in polycarbonate cages in a windowless room with a 12 h light/dark cycle. The room conditions were carefully controlled to meet the species' requirements, with a temperature of 22 °C and relative humidity maintained between 45 and 65%. To ensure sterility during the procedures, nitrile gloves and FFP3 masks were worn when handling the animals or contaminated samples. This study was approved by the Animal Care and Use Committee of the University of Valencia (2021/VSC/PEA/0112). After seven days of acclimatization, a total of 120 Wistar rats were divided into 12 groups, each consisting of 10 rats (5 males and 5 females), one group per feed. Among them, four test groups received mycotoxins individually or in combination, four were fed FW-containing contaminated feed, and the other four were fed FW + P-containing contaminated feed. For the feeds containing functional ingredients, 35 g of FW and P were added to each during preparation; this amount represents 1% ( w / w ). The control group was fed uncontaminated feed. The experimental conditions related to mycotoxin doses and their respective standard deviations are reported by . The doses of aflatoxin B1 (AFB1) and ochratoxin A (OTA) used in the study were calculated from the levels in the contaminated feed and the rats' daily intake: the AFB1 dose varied from 176 to 387 µg/kg body weight per day, depending on the experimental group and the sex of the rats, while the OTA dose ranged from 162 to 552 µg/kg body weight per day, with females generally receiving higher doses than males due to differences in feed intake relative to body weight. These doses were derived from feed containing AFB1 and OTA at concentrations of approximately 4.3–5.2 µg/g and 5.4–8.8 µg/g, respectively, and were adjusted for body weight and feed consumption to reflect realistic exposure scenarios. After 28 days, rats were sacrificed following isoflurane inhalation and organs were stored at −80 °C.
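The dose ranges quoted above follow from a simple mass balance between the feed contamination level, the daily feed intake, and the body weight. The snippet below is a hypothetical worked example of that arithmetic only; the intake and body-weight values are illustrative and not the measured ones.

```python
def daily_dose_ug_per_kg_bw(feed_conc_ug_per_g, feed_intake_g_per_day, body_weight_kg):
    """Dose (µg/kg bw/day) = feed concentration (µg/g) * feed intake (g/day) / body weight (kg)."""
    return feed_conc_ug_per_g * feed_intake_g_per_day / body_weight_kg

# Hypothetical rat: 5.0 µg AFB1/g feed, 20 g feed per day, 0.30 kg body weight
print(daily_dose_ug_per_kg_bw(5.0, 20.0, 0.30))  # ~333 µg/kg bw/day, within the AFB1 range quoted above
```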
4.3. Protein Extraction, Reduction, Alkylation, and Digestion
Protein extraction was initiated using 50 mg of liver tissue, which was homogenized in MilliQ-H 2 O using an Ultra Turrax (IKA T10 standard). Afterwards, proteins were precipitated twice by adding 2 mL of cold ethanol to each sample, bringing the final volume to 2.5 mL. Samples were then centrifuged at 4000 rpm and 4 °C for 15 min, the supernatant was discarded, and the pellets were resuspended in 500 μL of H 2 O. Protein concentration was determined using a NeoDot UV/Vis Nano Spectrometer in order to standardize the concentration to 1 mg/mL before digestion. Subsequently, samples were resuspended in 200 μL of lysis buffer (8 M urea/2 M thiourea/50 mM Tris-HCl) and underwent reduction and alkylation by adding solutions of DTT and IAA at a concentration of 200 mM and pH 7.8, prepared with MilliQ-H 2 O and 0.4 M Tris stock buffer (pH 7.8, Tris base/MilliQ-H 2 O). To break disulfide bonds, samples were incubated with 5 μL of 200 mM DTT for 1 h at 60 °C in a ThermoMixer C (Eppendorf). Samples were then incubated for 30 min at 37 °C with 20 μL of IAA to alkylate protein cysteine residues. Finally, trypsin (1 mg/mL) was added to start peptide digestion, which was carried out overnight at 37 °C. After that, the reaction was stopped by adding 5% acetic acid (pH 5), and the samples were filtered prior to LC-MS/MS-Q-TOF injection.
4.4. Identification and Quantification of Proteins Through LC-MS/MS-Q-TOF
Two technical replicates of each biological sample (50 μg/mL) were injected into an LC system (Agilent 1200 LC) coupled to a quadrupole time-of-flight (Q-TOF) mass spectrometer (Agilent 6540 UHD), using a C18 RP AdvanceBio peptide mapping capillary column (2.7 μm, 120 Å, 2.1 × 150 mm). The method previously developed by was followed. Briefly, a nonlinear 40 min gradient at a flow rate of 0.2 mL/min was utilized, with two mobile phases: phase A (H 2 O in 0.1% formic acid) and phase B (acetonitrile in 0.1% formic acid). The elution gradient started at 3% phase B for 1 min and increased to 40% at 21 min; over the next 3 min it reached 95%, which was held for 1 min, after which it decreased to 3% over 6 min and was maintained there for the final 8 min. The experimental conditions were repeated three times independently.
4.5. Statistical Analysis and Bioinformatics
The software Spectrum Mill MS Proteomics Workbench Package Rev B.06.00.201 (Agilent Technologies) was used to process the chromatographic spectra. This software is capable of analyzing data from high-quality spectra, reducing false positives, and identifying proteins and peptides by matching them against the UniProt database. Entities were then sorted by their frequency of occurrence across all replicates within each experimental group, following the MS/MS parameters previously retrieved and verified by . Afterwards, the identified proteins were statistically filtered using Mass Profiler Professional (MPP) software v15.0 (Agilent Technologies), and differences between the experimental mycotoxin groups and the control group were assessed using an unpaired t -test with Benjamini–Hochberg adjustment. Results with an FC ≥ 0.7 and a p -value < 0.05 were considered statistically significant and carried forward to the bioinformatics analysis, retaining the features that corresponded to UniProt accession codes for Rattus norvegicus . Finally, the BPs, MFs, and metabolic pathways associated with these proteins were explored using the Database for Annotation, Visualization, and Integrated Discovery (DAVID). Graphical representations of the data were created with GraphPad Prism software version 8.0.0 (San Diego, CA, USA). The Venn diagram for DEPs was generated using the Venny 2.1 interactive tool .
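As a rough illustration of the filtering logic described in Section 4.5 (unpaired t-test with Benjamini–Hochberg adjustment plus the FC ≥ 0.7 criterion stated above), here is a minimal Python sketch. The study itself used Mass Profiler Professional; the function, its array names, and the synthetic input below are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def filter_deps(mycotoxin_group, control_group, fc_cutoff=0.7, alpha=0.05):
    """Flag features passing an unpaired t-test (BH-adjusted p < alpha)
    together with the fold-change criterion (FC >= fc_cutoff).

    Both inputs are intensity arrays of shape (replicates, proteins).
    """
    _, pvals = ttest_ind(mycotoxin_group, control_group, axis=0, equal_var=False)
    _, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    fc = mycotoxin_group.mean(axis=0) / control_group.mean(axis=0)
    return (p_adj < alpha) & (fc >= fc_cutoff), p_adj, fc

# Synthetic example: 3 replicates x 4 proteins per group
rng = np.random.default_rng(42)
flagged, p_adj, fc = filter_deps(rng.normal(12, 1, (3, 4)), rng.normal(10, 1, (3, 4)))
print(flagged, np.round(p_adj, 3), np.round(fc, 2))
```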
Field experiments show no consistent reductions in soil microbial carbon in response to warming
Methodology overview
According to Patoine et al. , MBC showed a significant decreasing trend from 1992 to 2013, which was almost entirely attributed to climate change, with little contribution from land cover change. They further concluded that the climate contribution was dominated by increasing temperature rather than by the change in precipitation (their Supplementary Figs. and ). This conclusion is in line with their Supplementary Fig. and Supplementary Fig. , which show a clear decrease in MBC with increasing annual temperature, but no clear trend, or only a very slight increasing one, in MBC with increasing precipitation. Given these pieces of evidence, we decided to focus on the temperature effect on MBC in this analysis. Here, we focus on testing three hypotheses: (1) The MBC response to warming reported by Patoine et al. should be detectable using field warming experiments, which have been widely adopted to examine how MBC responds to temperature increase. (2) Similarly, we hypothesize that the response could also be found in in-situ long-term MBC measurements affected by interannual temperature changes. (3) Given that the Random Forest model used to predict MBC change during 1992–2013 by Patoine et al. was trained using largely static observations of MBC stock across spatial gradients, and that a clear spatial pattern of MBC stock exists across different climatic gradients (their Fig. ), we hypothesize that the conclusion of Patoine et al. might be subject to the space-for-time substitution (SFT) effect, in which case the predicted reduction over time could be an artifact of decreasing MBC stocks with increasing temperature over spatial gradients. To test the first two hypotheses, we compiled observations from field warming experiments and in-situ long-term measurements from the literature. To test the third one, we repeated the Random Forest model training followed by prediction of MBC change for 1992–2013 following the same method as Patoine et al. , but used bootstrapping sub-sampling to obtain variations in both the predicted MBC change rate and the spatial slope between MBC and temperature, and further examined how the predicted MBC change rate responds to the derived spatial slope.
Analysis using field warming experiment data
A systematic, reproducible workflow was followed to ensure the suitability and completeness of the field warming experiment data included in this study (Supplementary Fig. ). Laboratory-controlled warming experiments were excluded because they are less representative of field conditions. Peer-reviewed articles on soil warming effects on soil microbial biomass were collected from a literature search using "soil warm" and "microbial biomass" as keywords in ScienceDirect ( https://www.sciencedirect.com/ ), the China National Knowledge Infrastructure (CNKI, https://www.cnki.net/ ), and Google Scholar, together with papers cited in previous review studies. After applying the criteria for an article to be included (Supplementary Fig. ), a total of 130 paired MBC measurements from control and warming sites were collected from 69 papers (Fig. ).
To evaluate how MBC responds to soil warming, the effect of warming on MBC was calculated for each pair of measurements using the natural log-transformed response ratio (LN(RR)):

LN(RR) = ln(MBC_t) − ln(MBC_c)     (1)

where MBC_t and MBC_c represent MBC from the warming and control treatments, respectively, and the response ratio (RR) was natural-log transformed, a common practice to bring it closer to a normal distribution . As LN(RR) appeared larger for intermediate warming levels than for either low or high warming magnitudes, potential effects of warming magnitude on LN(RR) were examined using a quadratic fit between LN(RR) and warming magnitude (R 2 adj = 0.23, p < 0.01, Supplementary Fig. ). The MBC response to soil warming was also examined in detail by separating all field-warming observations into different groups of warming magnitude (<1 °C, 1–2 °C, 2–3 °C, 3–4 °C, and 4–5 °C). A random-effects model was used to obtain the overall effect of warming on MBC and to test its statistical significance (Fig. ). Funnel plots and the "metabias" method from the 'meta' package in R were employed to investigate potential publication bias for each warming magnitude group (Supplementary Fig. ). If the funnel plot showed significant asymmetry (i.e., p < 0.05 derived using the "Egger" test from the "metabias" method), an iterative "trim-and-fill" method was used to remove the most extreme publication(s) from either the left or the right tail of the funnel plot until it became symmetric, and then to fill in imputed missing publication(s), followed by computation of a new effect size of the MBC response to warming. The impacts of warming duration on MBC responses were examined similarly by grouping observations into durations of <3 years, 3–6 years, and 6–30 years.
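For readers who want to reproduce the effect-size step outside R, the sketch below computes LN(RR) for each control–warming pair and pools the values with a DerSimonian–Laird random-effects estimator. It is a simplified stand-in for the 'meta' package workflow described above, under the assumption that means, standard deviations, and sample sizes are available for each pair; the example numbers are placeholders.

```python
import numpy as np

def ln_rr(mean_w, mean_c):
    """Log response ratio: LN(RR) = ln(MBC_warmed) - ln(MBC_control)."""
    return np.log(mean_w) - np.log(mean_c)

def ln_rr_variance(mean_w, sd_w, n_w, mean_c, sd_c, n_c):
    """Approximate (delta-method) sampling variance of LN(RR)."""
    return sd_w**2 / (n_w * mean_w**2) + sd_c**2 / (n_c * mean_c**2)

def dersimonian_laird(y, v):
    """Random-effects pooled estimate for effect sizes y with variances v."""
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance
    w_re = 1.0 / (v + tau2)
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Placeholder example with three hypothetical control-warming pairs
mw, mc = np.array([410.0, 620.0, 300.0]), np.array([450.0, 600.0, 340.0])
y = ln_rr(mw, mc)
v = ln_rr_variance(mw, np.array([60.0, 80.0, 50.0]), np.array([5, 6, 4]),
                   mc, np.array([55.0, 70.0, 45.0]), np.array([5, 6, 4]))
print(dersimonian_laird(y, v))
```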
Analysis using in-situ long-term MBC measurements
We initially searched the MBC datasets used by Patoine et al. and in a systematic analysis by Xu et al. for in-situ long-term MBC measurements, but found only one study (Supplementary Table and Supplementary Fig. ) meeting our criteria. A subsequent systematic search in ScienceDirect, CNKI, and Google Scholar using the search terms "long-term soil microbial biomass carbon" and "soil microbial biomass carbon interannual variability" retrieved another five studies that met our criteria (Supplementary Table ). For each site, annual temperatures corresponding to the observation years were retrieved from the WorldClim dataset using the recorded site location information, and a linear relationship between the observed MBC and annual temperature was fitted to examine its response to changes in temperature (Fig. ).
Testing the space-for-time substitution (SFT) effect in Patoine et al.
According to the SFT hypothesis described above, greater predicted reductions in global MBC are to be expected when the approach of Patoine et al. is applied to subsets of the observation data that have steeper negative spatial slopes between MBC and temperature. Bootstrapping sub-sampling was used to verify this hypothesis: (1) 500 MBC observations were randomly taken (with replacement) from the original MBC dataset of Patoine et al. ( n = 762), with the sampling repeated 200 times. Following the method described in Patoine et al. , a Random Forest model was trained on each sub-sample and was then used to predict global MBC for 1992–2013. For each sub-sample, the slope between MBC and annual temperature was also derived using a simple linear regression. Finally, the relationship between the predicted MBC change rate and the slope of MBC against temperature was examined. (2) Similar to (1), but the dataset for sub-sampling was the dataset of Patoine et al. combined with the MBC observations from the control treatments of the field-warming dataset ( n = 762 + 106). Only MBC observations reported in units that could be converted to mmol kg −1 were used, resulting in 106 measurements. The same procedure as used by Patoine et al. was then followed to derive soil MBC stocks. In both tests, following Patoine et al. , the environmental variables annual temperature, soil organic carbon, soil pH, precipitation, soil clay content, soil sand content, land cover, soil nitrogen content, NDVI, and elevation were used as predictor variables in the Random Forest modeling. Values for these variables corresponding to the 106 control MBC measurements were extracted from the same global datasets used by Patoine et al. based on site geolocations. To account for only those spatial grid cells where the coverage of environmental variables allows a high-confidence prediction of MBC, a spatial coverage analysis was performed for each bootstrapping sub-sample (for both n = 762 and n = 762 + 106) following the approach of Patoine et al. (i.e., the 'Mahalanobis distance' approach and the 'dissimilarity index' approach). The results obtained by using different layers of valid pixels for model prediction for the different bootstrapping sub-samples are shown in Fig. . An alternative approach, using a single shared layer of valid pixels containing only the collocating valid pixels of all 200 bootstrapping sub-samples, yielded similar results (Supplementary Fig. ).
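The bootstrap test of the SFT hypothesis can be summarized in a compact Python sketch; it uses scikit-learn in place of the original R implementation and is only meant to convey the structure of the procedure. The data-frame layout, the column names ('mbc', 'annual_temperature'), the predictor list, and the prediction grid are assumptions, not the actual datasets.

```python
import numpy as np
import pandas as pd
from scipy.stats import linregress
from sklearn.ensemble import RandomForestRegressor

def bootstrap_sft(obs: pd.DataFrame, grid: pd.DataFrame, predictors,
                  n_iter=200, n_sub=500, seed=1):
    """For each bootstrap sub-sample: record the spatial MBC-temperature slope
    and a Random-Forest prediction of mean MBC over an environmental grid
    (a stand-in for the per-pixel 1992-2013 prediction step)."""
    rng = np.random.default_rng(seed)
    slopes, mean_preds = [], []
    for _ in range(n_iter):
        sub = obs.sample(n=n_sub, replace=True,
                         random_state=int(rng.integers(0, 2**31 - 1)))
        slopes.append(linregress(sub["annual_temperature"], sub["mbc"]).slope)
        rf = RandomForestRegressor(n_estimators=500, random_state=0, n_jobs=-1)
        rf.fit(sub[predictors], sub["mbc"])
        mean_preds.append(rf.predict(grid[predictors]).mean())
    return np.array(slopes), np.array(mean_preds)

# One would then regress mean_preds (or a change rate derived from them) against
# slopes to see whether steeper spatial slopes yield larger predicted reductions,
# as the SFT hypothesis predicts.
```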
Biological Control of
Escherichia coli O157:H7, a leading foodborne pathogen, is commonly shed in the feces of cattle and other food-producing animals. Numerous studies have reported the prolonged survival of E. coli O157:H7 in raw manure, thereby heightening the risk of its transmission into the food chain and posing a public health threat . Indeed, outbreaks of E. coli O157:H7 infections have frequently been linked to the consumption of fresh produce or other food products directly or indirectly contaminated by water or manure containing this foodborne pathogen . Due to the presence of human pathogens in raw animal wastes, the proper composting of these wastes and handling of the finished products are critical for ensuring the safety of fresh produce production when animal manure-based compost is used as a fertilizer and biological soil amendment. Importantly, the Food and Drug Administration's (FDA) Food Safety Modernization Act (FSMA) Produce Safety Rule has placed limitations on the use of raw manure and has also established microbial standards for composted manure used on crops produced for direct human consumption . Composting is an aerobic process during which organic waste is biologically degraded by microorganisms to humus-like material. Both bacteria and fungi are present and active in a typical composting process . Most of the foodborne pathogens inherently present in the raw manure are inactivated during the thermophilic phase due to high temperature . Furthermore, compost contains a wealth of microbial species; however, these organisms face fierce competition within their environment. Compost microorganisms can interact synergistically or compete for the available nutrients . In this complex ecosystem, it is likely that some microorganisms have acquired protective features, such as the secretion of biocidal compounds. Bacterial competition in the environment can be classified as exploitative competition, in which bacteria utilize limited nutrients or compete for colonizing sites, thereby depriving other microorganisms of those resources, and interference competition, in which cell damage occurs via the release of bioactive compounds by other microorganisms . As a result, certain populations of the compost microflora may possess antimicrobial activities against harmful human pathogens. Biocontrol of foodborne pathogens in agricultural settings, such as animal production, fresh produce fields, and food processing environments, has been reported . This approach seems feasible since these microorganisms originate from agricultural environments and would be well adapted to their native environment. Another advantage is that, ultimately, the use of biocontrol agents against foodborne pathogens leads to less reliance on harmful chemicals and sanitizers by the food industry. The objective of this study was to isolate microorganisms from compost samples that produce metabolites bacteriostatic or bactericidal to E. coli O157:H7 and then to determine their ability to inhibit the growth of the pathogen in dairy compost under laboratory and greenhouse conditions.
2.1. Bacterial Strains and Culture Conditions
Due to strain variation in growth parameters and persistence, a cocktail of three to five E. coli O157:H7 strains was used for this study. Five E. coli O157:H7 strains (spinach outbreak strain F06M-0923-21 and Taco John's outbreak strain F07M-020-1, both obtained from the California Department of Health , avirulent strain B6914 ( stx 1 − and stx 2 − ) obtained from Dr.
Pina Fratamico, USDA-ARS-ERRC , and avirulent strains MD46 and MD47 obtained from Dr. Mike Doyle at the University of Georgia) were used in this study . To differentiate them from the competitive exclusion (CE) strains and the compost microflora, all tested E. coli O157:H7 strains were induced to be rifampicin-resistant via the gradient plate method , and no antagonistic effect was observed among these strains. Prior to each experiment, the strains from the freezer stocks were streaked on Tryptic Soy Agar supplemented with 100 µg mL −1 rifampicin (Fisher Scientific, Fair Lawn, NJ, USA) (TSA-R) plates and incubated at 35 °C for 24 h. Single colonies were inoculated into Tryptic Soy Broth (TSB) without glucose, grown to an early stationary phase, and used in further experiments.
2.2. Competitive Exclusion Microorganism Isolation and Culture Conditions
The CE strains were isolated from 31 samples of finished composts, including poultry litter-, dairy manure-, and plant waste-based composts, as described previously . Briefly, 9 mL of universal pre-enrichment broth (UPB) was added to each compost sample (ca. 1 g), and the mixtures were serially diluted (1:10) in phosphate-buffered saline (PBS). A volume of 0.1 mL of each dilution was plated in duplicate on tryptone, yeast extract, proteose peptone 3 agar (TYP) plates containing proteose peptone 3 (5 g L −1 ), tryptone (5 g L −1 ), yeast extract (5 g L −1 ), sodium chloride (8 g L −1 ), and agar (17 g L −1 ) and incubated at room temperature. Colonies were randomly selected from the plates and streaked several times for isolation. Two methods, a spot-on-lawn assay and liquid co-culture experiments, were used to screen isolates for antimicrobial activity against E. coli O157:H7 strains. In addition to selection at room temperature, some isolates were tested for antimicrobial activity at 42 °C. For the spot-on-lawn assay, 0.1 mL of approximately 10 7 CFU mL −1 cells of the 3-strain cocktail of E. coli O157:H7 (F06M-0923-21, F07M-020-1, and B6914) was plated in duplicate onto the surface of a TYP plate. Putative CE isolates were grown individually on TYP plates at 25 °C for 48 h; then, a single colony was replica-plated onto a sterile TYP plate and a TYP plate containing the E. coli O157:H7 strains as the indicator microorganism. The plates were incubated at 25 °C for 48 h and then observed for zones of inhibition. CE isolates were selected for the liquid co-culture experiment based on their antimicrobial activity against E. coli O157:H7, expressed as a clear inhibition zone. For the liquid co-culture experiments, E. coli O157:H7 strain B6914 was grown to stationary phase at room temperature on a rotary shaker in TYP broth. The putative CE isolates were grown under similar conditions. To test the inhibitory capacity in TYP broth, CE isolates were inoculated at equal concentration (ca. 10 2 CFU/mL) with the target E. coli O157:H7 strain. Concurrently, individual CE strains and the E. coli O157:H7 strain were inoculated into TYP broth separately and monitored for growth. Samples were collected at selected intervals and plated on TSA-R to enumerate E. coli O157:H7 only, or on TSA for the CE isolates.
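Enumeration results like those described above come down to converting colony counts and dilution factors into log CFU values and comparing them with a control. The helper below is a generic, hypothetical illustration of that conversion, not a script used in this work.

```python
import math

def log_cfu_per_g(colonies, dilution_factor, plated_volume_ml=0.1):
    """log10 CFU/g = log10(colonies / plated volume (mL) * total dilution factor),
    where the dilution factor already includes the initial 1:10 sample dilution."""
    return math.log10(colonies / plated_volume_ml * dilution_factor)

def log_reduction(log_control, log_treated):
    """Log reduction relative to the control (positive = fewer survivors with CE)."""
    return log_control - log_treated

# Hypothetical counts: 52 colonies on the 10^-4 dilution plate, 0.1 mL plated
print(round(log_cfu_per_g(52, 1e4), 2))    # ~6.72 log CFU/g
print(round(log_reduction(6.72, 4.60), 2))  # ~2.12-log reduction
```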
2.3. Species Identification by Amplifying the 16S rRNA Gene
The DNA of potential CE isolates from the compost samples was extracted using the UltraClean TM Microbial DNA Isolation Kit (Mo-Bio Laboratories, Inc., Carlsbad, CA, USA) as described in the manufacturer's instructions. Isolates were identified by PCR amplification of 16S rRNA genes using universal primers and sequenced by Eurofins Genomics (Louisville, KY, USA) as described previously . The forward primer ENV1 (5′-AGA GTT TGA TII TGG CTC AG-3′) targets positions 8–27 of the E. coli 16S rRNA, whereas the reverse primer ENV2 (5′-CGG ITA CCT TGT TAC GAC TT-3′) corresponds to positions 1511–1492 . PCR reagents alone were used as a negative control, while E. coli O157:H7 DNA was used as a positive control. The bacterial species were identified using BLAST (NCBI) and the Ribosomal Database Project .
2.4. Compost Inoculation, Sampling and Bacterial Enumeration
Finished dairy waste-based compost (Black Kow ® , Black Gold Compost Co., Oxford, FL, USA) was used to determine the efficacy of CE strains against E. coli O157:H7 under both laboratory and greenhouse conditions. Prior to the experiments, large particles present in the compost samples were removed by sieving (sieve pore size, 0.3 × 0.3 cm). The compost was placed in sterile containers under refrigeration and used for further experiments.
2.4.1. E. coli O157:H7 Growth under Laboratory Conditions
The selected CE strains ( n = 3) were grown in TSB without glucose to the early stationary phase and then centrifuged and washed twice with 0.8% saline solution. To determine the effectiveness of CE on E. coli O157:H7 inhibition in compost, about 4 logs of the 3-strain cocktail of CE cultures were inoculated into the above compost, which contained ca. 6 logs of indigenous microorganisms, using the spraying method . The CE-inoculated compost was then adjusted with sterile tap water to different moisture contents (20, 30, and 40%) and acclimated at room temperature for 24 h. Overnight cultures of the three rifampicin-resistant E. coli O157:H7 strains (F06M-0923-21, F07M-020-1, and B6914) grown in TSB-R broth were washed with saline and then inoculated into the CE-inoculated compost at an initial concentration of ca. 2 log CFU/g, and the inoculated samples were then stored at 22 or 30 °C. At selected intervals, compost samples were enumerated for E. coli O157:H7 on TSA-R plates.
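Bringing compost to the target moisture contents used above (20, 30, and 40%) is a simple mass-balance calculation. The function below sketches it for wet-basis moisture; it is an illustrative assumption of how such an adjustment could be computed, not the procedure actually followed in the study.

```python
def water_to_add_g(sample_wet_g, current_mc, target_mc):
    """Water (g) needed to raise a sample from current to target moisture content,
    both expressed wet-basis as fractions (e.g., 0.20 for 20%).

    The dry matter stays constant: dry = wet * (1 - current_mc), and the final
    wet mass must satisfy dry = final_wet * (1 - target_mc).
    """
    if target_mc <= current_mc:
        return 0.0
    dry = sample_wet_g * (1.0 - current_mc)
    return dry / (1.0 - target_mc) - sample_wet_g

# Hypothetical example: 100 g of compost at 20% MC brought up to 40% MC
print(round(water_to_add_g(100.0, 0.20, 0.40), 1))  # ~33.3 g of water
```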
2.4.2. E. coli O157:H7 Growth under Greenhouse Conditions
Two experimental approaches were conducted in the greenhouse. Both the CE strains and the E. coli O157:H7 strains were prepared as described above. The first approach was to simulate pathogen contamination of the finished compost. Briefly, the finished compost with adjusted moisture levels of 20, 30, and 40% was first inoculated (at a ratio of 1:10 v/wt) with the 10-strain cocktail of CE cultures to reach ca. 10 8 –10 9 CFU g −1 . After 24 h, the compost was inoculated with a cocktail of three avirulent E. coli O157:H7 strains (B6914, MD46, and MD47) at ca. 10 5 –10 6 CFU g −1 . Samples consisted of (i) compost inoculated only with the E. coli O157:H7 cocktail, (ii) compost inoculated only with the CE cocktail, (iii) compost inoculated with both the E. coli O157:H7 and CE cocktails, and (iv) uninoculated compost. The second approach was to simulate the survival of the pathogen during thermophilic composting. To prepare heat-adapted cells in compost, the avirulent E. coli O157:H7 cocktail described above was inoculated (1:10 v/wt) into the finished compost with 40% MC, subjected to heat at 48 °C for 30 min, and then inoculated further at a ratio of 1:10 wt/wt into compost samples with 40, 30, and 20% MC. After 24 h of incubation at room temperature, the E. coli O157:H7-inoculated compost samples were inoculated (1:10 v/wt) with the 10-strain cocktail of CE cultures to reach ca. 10 8 –10 9 CFU g −1 . Four treatments of compost samples were prepared in the same way as described for the first approach. For both approaches, two independent experiments were performed in triplicate. Experiments were performed as follows: Summer trials (August–September), Fall trials (October–December), and Winter trials (February–March) inside a greenhouse. Sterile cups containing compost samples were arranged in large plastic containers, and a digital hydrothermometer (EU 620-0915; VWR International, Radnor, PA, USA) for temperature and relative humidity was placed inside. The containers held receptacles with saturated KCl solution and were closed every evening and opened in the morning. The moisture levels of the samples were adjusted every evening based on weight loss. Adjustment in the morning was not necessary since there was little moisture loss during the overnight storage at high relative humidity. Therefore, samples were subjected to lower temperatures and high relative humidity overnight and to higher temperatures and decreased humidity during the day. Treatments were sampled on day 2 and then every 4 days and analyzed for moisture content (the moisture levels of the samples were adjusted every day in the greenhouse for all samples) and bacterial enumeration. Briefly, 5 g of inoculated compost was mixed and homogenized with 45 mL of PBS in a sterile stomacher bag. The samples were then serially diluted and plated on TSA-R for the enumeration of E. coli O157:H7 or on TSA for the enumeration of CE or the compost microflora. Data obtained from bacterial enumeration were expressed as log CFU per gram dry weight (CFU g/dw), and the detection limit of the plating method was approximately 100 CFU g/dw .
2.5. Statistical Analysis
The analysis of pathogen survival data was performed using JMP 11.2.1 (SAS Institute Inc., Atlanta, GA, USA). Analysis of variance (ANOVA), followed by the least significant difference (LSD) test, was carried out to determine whether significant differences ( p < 0.05) existed among the different treatments.
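The survival data were analyzed in JMP with one-way ANOVA followed by Fisher's LSD test. The Python sketch below mirrors that general workflow (ANOVA, then pairwise comparisons), using Tukey's HSD as a readily available stand-in for LSD; the treatment labels and log CFU values are placeholders, not data from this study.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder log CFU/g values for three hypothetical treatments
pathogen_only  = np.array([6.1, 6.3, 6.0, 6.2])
pathogen_ce_30 = np.array([5.0, 5.2, 4.9, 5.1])
pathogen_ce_40 = np.array([4.2, 4.4, 4.1, 4.3])

f_stat, p_val = f_oneway(pathogen_only, pathogen_ce_30, pathogen_ce_40)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

if p_val < 0.05:  # follow up with pairwise comparisons only if the ANOVA is significant
    values = np.concatenate([pathogen_only, pathogen_ce_30, pathogen_ce_40])
    groups = ["pathogen_only"] * 4 + ["pathogen_CE_30MC"] * 4 + ["pathogen_CE_40MC"] * 4
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```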
The analysis of pathogen survival data was performed using JMP 11.2.1 (SAS Institute Inc., Atlanta, GA, USA). Analysis of variance (ANOVA), followed by the least significant difference (LSD) test, was carried out to determine whether significant differences ( p < 0.05) existed among different treatments. 3.1. Isolation and Identification of CE Bacteria against E. coli O157:H7 Potential CE microorganisms were isolated from various samples, including dairy manure-based and chicken litter-based finished compost, plant-based compost, and commercial organic fertilizers ( n = 31). A total of 786 phenotypically different colonies were purified and tested for inhibition activity against E. coli O157:H7 using the spot-on-lawn method followed by the broth co-culture method. A total of 22 isolates were considered potential CE microorganisms. In the presence of individual CE strains, the E. coli O157:H7 population reduction ranged from 1.1 to 3.9 logs in TYP broth and 0.9 to 3.7 logs in compost, with the Kluyvera strain being the most effective. These CE isolates were identified as Brevibacillus parabrevis, Bacillus amyloliquefaciens, Pseudomonas thermotolerans, Comamonas testosteroni, Enterobacter, Citrobacter, Raoultella, Kluyvera, unclassified Comamonadaceae, and unclassified Enterobacteriaceae by 16S rRNA gene sequencing. Three CE isolates ( B. parabrevis , B. amyloliquefaciens , and P. thermotolerans ) were selected for laboratory trials, and ten CE isolates were used for the greenhouse study. 3.2. Effectiveness of CE Treatment on the Growth Reduction in E. coli O157:H7 in Compost under Laboratory Conditions Under laboratory conditions, E. coli O157:H7 grew in the compost with or without CE application under three moisture levels (20, 30, and 40%) and two temperatures (22 and 30 °C) ( and ). As compared with the controls, the CE treatment effectively reduced the growth of E. coli O157 within 3 days of incubation at 22 and 30 °C, by 1.1~2.1, 2.2~2.6, and 2.6~3.4 logs in compost with moisture levels of 20, 30, and 40%, respectively. For the compost with 20% moisture, there was more reduction in E.
coli O157:H7 at 30 °C than at 22 °C; however, at higher moisture contents (30 and 40%), CE reduced the E. coli O157:H7 population slightly more at the lower temperature (22 °C). 3.3. Effectiveness of CE Treatment on the Growth Reduction in E. coli O157:H7 in Compost under Greenhouse Conditions To test the effects of seasonal changes on bacterial inactivation, experiments were performed in the fall, winter, and summer seasons. The average temperatures in the greenhouse were 24.4, 21.2, and 28.4 °C for the fall, winter, and summer trials, respectively, while the average values of relative humidity were 42.9, 28.0, and 55.4%, respectively. Two different scenarios for pathogen inoculation were tested: a possible recontamination event of the finished compost and the presence of heat-adapted cells that survived the thermophilic phase of composting. For the controls, neither the season nor the compost moisture level had an overall influence on pathogen survival in the compost samples. In the presence of CE microorganisms, E. coli O157:H7 inoculated into composts with high moisture levels (30 and 40%) declined faster than in the compost with the low moisture level (20% MC), regardless of the inoculation method. Overall, the E. coli O157:H7 population was reduced more for non-adapted cells (0.06 to 2.14 log CFU/g) than for heat-adapted cells (0.02 to 1.54 log CFU/g) by CE treatment across all trials. These results demonstrated the impact of bacterial physiological state and moisture levels on pathogen survival in the compost environment. Seasons influenced the rate of pathogen inactivation. Although E. coli O157:H7 declined in CE-treated samples in all cases as compared with the controls, significant inactivation of non-adapted E. coli O157:H7 by CE microorganisms occurred after only 2 days of storage in the greenhouse in compost samples with higher moisture content (40 and 30%) during the fall and winter trials. In the compost with 20% MC, a significant reduction in E. coli O157:H7 by CE microorganisms took 16 days of storage under the same conditions. On the other hand, the heat-adapted cells showed resistance to the inhibitory action of CE, since significant differences between treatments and controls were present only after 12 days for compost with 40 and 30% MC and after 16 days of storage for compost with 20% MC in the fall trial. A similar outcome resulted from the winter trial: heat-adapted cells with CE treatments showed differences compared with controls in compost with 40% moisture content at day 8, 30% moisture content at day 12, and 20% moisture content at day 16 of greenhouse storage. As for the summer trial, there was no significant difference between the treatments and the controls in the first 4 days of greenhouse incubation for both heat-adapted and non-heat-adapted cells. The E. coli O157:H7 population in most of the treatments dropped to significantly lower levels after 8 days of storage in the greenhouse. The temperature in the greenhouse varied greatly among the three tested seasons (in the summer trial, occasional temperatures over 50 °C were recorded in the samples, whereas in the fall and winter, the temperature did not exceed 38 °C). Also, some of the CE strains did not grow at elevated temperatures (42 °C) and therefore may be less active when exposed to elevated temperatures.
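The pathogen survival data were analysed in JMP by ANOVA followed by the LSD test. For readers who want to reproduce that style of comparison outside JMP, the sketch below uses a hypothetical data frame, runs a one-way ANOVA, applies simple pairwise t tests as a stand-in for Fisher's LSD, and converts a mean log reduction into the equivalent percentage reduction (2 logs is roughly 99%). Treatment labels and values are placeholders, not the study data.

```python
import pandas as pd
from scipy import stats
from itertools import combinations

# Hypothetical long-format survival data: log CFU/g (dry wt) by treatment
df = pd.DataFrame({
    "treatment": ["control"] * 3 + ["CE_30pctMC"] * 3 + ["CE_40pctMC"] * 3,
    "log_cfu":   [5.8, 6.0, 5.9,   3.9, 4.1, 4.0,        3.4, 3.6, 3.5],
})

# One-way ANOVA across treatments
groups = [g["log_cfu"].values for _, g in df.groupby("treatment")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Pairwise t tests as a simple stand-in for Fisher's LSD after a significant ANOVA
for a, b in combinations(df["treatment"].unique(), 2):
    t, p = stats.ttest_ind(df.loc[df.treatment == a, "log_cfu"],
                           df.loc[df.treatment == b, "log_cfu"])
    print(f"{a} vs {b}: p = {p:.4f}")

# Mean log reduction relative to the control and its percentage equivalent
log_red = (df.loc[df.treatment == "control", "log_cfu"].mean()
           - df.loc[df.treatment == "CE_40pctMC", "log_cfu"].mean())
print(f"reduction: {log_red:.1f} logs = {(1 - 10 ** -log_red) * 100:.1f}%")
```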
Composting is an environmentally friendly process for converting livestock and agricultural wastes into organic fertilizer and soil amendment. During composting, the high temperatures achieved in the thermophilic phase are critical for pathogen inactivation. However, despite these high temperatures, extended survival of pathogens in compost has been reported. This study evaluated the effectiveness of selected competitive exclusion (CE) microorganisms isolated from composts for inactivating E. coli O157:H7 in dairy compost with different moisture levels under both laboratory and greenhouse conditions. According to the literature, lactic acid bacteria, Enterococcus, Pseudomonas, Paenibacillus, Streptomyces, Bacillus, and some commercially produced bacterial cultures have been widely used as CE microorganisms for controlling foodborne pathogens. Some of the CE species identified in this study were previously reported as possessing inhibitory activities against both human and plant pathogens. For example, Pseudomonas aeruginosa ISO1 and ISO2 isolated from compost inhibited the plant pathogens Pythium aphanidermatum and Fusarium solani. Wang and Jiang reported the inhibition of 10 fresh-produce outbreak strains of Listeria monocytogenes by up to 2.2 logs by 17 CE strains isolated from compost, including Bacillus spp., Brevibacillus spp., Kocuria spp., Paenibacillus spp., and Planococcus spp. Additionally, Kluyvera, a soil bacterium, produced a significant reduction in E. coli O157:H7 in both liquid broth and compost. A previous study reported that Kluyvera ascorbata SUD165 could protect canola, Indian mustard, and tomato seedlings against the inhibitory effects of high concentrations of heavy metals such as nickel, lead, and zinc by providing the plants with sufficient iron. Iron is an essential micronutrient for most pathogenic bacteria, including E. coli O157:H7, playing a vital role in growth, metabolism, and various other cellular processes. The robust iron sequestration capability of Kluyvera may account for the reduction in E. coli O157:H7 observed in this study. However, experimental confirmation is necessary to validate this hypothesis.
Compost is rich in nutrients, and studies have shown the growth of foodborne pathogens in compost under favorable conditions. Data from the laboratory trials showed that CE reduced the E. coli O157:H7 population slightly more at the lower temperature (22 °C) in compost with higher moisture contents (30 and 40%). Being a mesophile, E. coli O157:H7 would be expected to grow faster in higher-moisture compost at 30 °C than at 22 °C. In contrast, the CE strains isolated from the finished compost grow better at room temperature. Given the high growth rate of CE microorganisms in compost with high MC, it is not surprising that more E. coli O157:H7 was inactivated at room temperature than at 30 °C. Even though animal manure-based compost is highly recommended for use as an organic fertilizer or biological soil amendment in agricultural production, inadequately treated or handled compost has been implicated in a few produce-related outbreaks. It is well documented that foodborne pathogens, such as Salmonella spp., can regrow in composted biosolids and stored biosolids. However, only a few studies have examined the growth potential of pathogens in animal manure-based compost. Kim and Jiang reported that E. coli O157:H7, Salmonella spp., and Listeria monocytogenes were able to grow ca. 2–4 logs in 3 days in compost in a greenhouse setting across different seasons when the population of indigenous microorganisms was low (<3 logs CFU/g) and the moisture content was at least 30–40%. To evaluate the impact on pathogen growth in compost, our CE treatment trials investigated several factors, such as temperature, compost moisture, and the physiological state of E. coli O157:H7, which are considered key factors influencing the fate of enteric bacterial pathogens in the environment. In this study, the maximal reduction in E. coli O157:H7 was 2 logs under greenhouse conditions, which is similar to our previous study on inhibiting L. monocytogenes in compost using CE microorganisms. Up to 2.2 log inhibition of L. monocytogenes in both compost extract and compost samples by compost-adapted CE microorganisms was reported, and the inhibition was affected by compost type, nutrient level, and incubation temperature. These results suggest the effectiveness of applying CE microorganisms to control foodborne pathogens in the finished compost. Due to the temperature gradients formed across composting heaps or piles, some populations of bacterial pathogens may be heat-shocked and survive the composting process by adapting to sublethal temperatures. Singh et al. reported that heat-shocked E. coli O157:H7, Salmonella spp., and L. monocytogenes showed extended survival at lethal temperatures (50–60 °C) following heat shock at 47.5 °C for 1 h. Besides conferring heat resistance, the heat-shock response can also induce cross-resistance to other stressors, including competition from other microorganisms. In this study, it appears that the heat-shocked E. coli O157:H7 became more resistant to CE treatment than the non-heat-shocked pathogen in compost. A possible explanation is that the changes induced by newly expressed heat-shock genes could influence interactions of heat-shocked E. coli O157:H7 with other microorganisms. These interactions may affect adhesion, biofilm formation, nutrient utilization, or susceptibility to antimicrobial compounds produced by competing microorganisms. Further study is needed to understand this cross-resistance mechanism. As stated by Mead, factors unique to the field conditions affecting the efficacy of CE treatment should be evaluated.
In this study, the reduction in E. coli O157:H7 by CE treatment in compost with similar moisture levels and temperatures was noticeably smaller under greenhouse conditions than under laboratory conditions. Unlike the controlled environment of laboratory-based studies, the pathogen in the greenhouse environment is exposed to various stresses, such as fluctuations in temperature and relative humidity, UV exposure, unsteady airflow, and others. Based on our research findings, a cocktail of CE microorganisms should be applied a few days prior to the use of the finished compost, preferably in the colder seasons. The advantages of treating the finished compost with compost-isolated CE microorganisms are that (i) these CE microorganisms are adapted to the compost environment, which ensures their survival; (ii) this biological control method improves the microbiological safety of the compost; and (iii) it avoids major changes in the compost's physicochemical and microbiological properties. Our results demonstrated that up to 99% of the E. coli O157:H7 population resulting from cross-contamination can be effectively reduced within 2 days during the colder seasons (winter and fall) by CE microorganisms (such as Brevibacillus, Bacillus, Pseudomonas, and Kluyvera) in finished dairy compost with at least 30% moisture. For those heat-adapted E. coli O157:H7 cells surviving the thermophilic composting process, the inhibitory effects of CE became significant only after 8~12 days, suggesting cross-resistance of the heat-adapted E. coli O157:H7 population. Both higher moisture content in the compost and colder seasons enhanced the activity of CE microorganisms against E. coli O157:H7. These results indicate that some indigenous compost microflora can be an efficient tool to control foodborne pathogens in finished compost and to reduce the potential for soil and crop contamination. However, factors such as the physiological state of the bacteria, the environmental conditions, and compost moisture levels should be considered. Furthermore, these CE strains should be further characterized to ensure the safety of applying them as biological control agents. Based on the results of this study, to prevent pathogen growth in finished compost due to cross-contamination, a cocktail of competitive exclusion microorganisms can be applied a few days prior to the use of the finished compost, preferably in the colder seasons.
Comparison of long-term ankle joint function after one-stage and staged microsurgical repair of open achilles tendon defects
4b876f9c-223d-4890-9544-b0ae29e5dedc
11804006
Surgical Procedures, Operative[mh]
The heel region is an important functional area for wearing shoes and bearing weight. The Achilles tendon is the thickest tendon in the human body and can withstand a load of 2 ~ 3 times body weight during walking and loads of up to 12.5 times body weight during intense running and jumping. Once an Achilles tendon defect is accompanied by soft tissue loss in the heel region, microsurgical techniques are usually required for repair. Since Taylor et al. reported the use of vascularised tendon composite flaps for the repair of tendon‒skin composite defects in 1979, there have been increasing reports on the repair and reconstruction of Achilles tendon defects using microsurgical procedures. However, few studies have compared one-stage and staged microsurgical reconstruction and repair of open Achilles tendon defects. Therefore, this study examined 147 patients with open Achilles tendon defects treated from January 2007 to September 2023 to compare and analyse the two surgical methods based on relevant medical records and ankle function follow-up data. Furthermore, differences in long-term follow-up ankle joint function in the treatment of open Achilles tendon defects were compared between one-stage and staged microsurgical procedures. General Clinical Characteristics of the Patients A retrospective analysis was conducted on 147 patients with open Achilles tendon defects treated at the 904th Hospital of the Joint Logistics Support Force of the Chinese People's Liberation Army and the Fifth People's Hospital of Wuxi from January 2007 to September 2023. General clinical data, including age, sex, height, body weight, body mass index, injury site (left or right), injury type (Myerson classification), length of Achilles tendon defect, area of skin and soft tissue defect in the heel region, Achilles tendon thickness, surgical time, suture removal time, cause of injury, and associated infection, were collected. The inclusion criteria were as follows: (1) open Achilles tendon defect: presence of skin and Achilles tendon or calcaneus defects; (2) new Achilles tendon rupture, with the injury course not exceeding 4 weeks; (3) no severe cardiovascular or cerebrovascular diseases, mental disorders, or renal dysfunction; and (4) no autoimmune diseases or blood disorders. The exclusion criteria were as follows: (1) closed Achilles tendon rupture and (2) chronic Achilles tendon defect, with the course of the condition exceeding 4 weeks. This study was approved by the Ethics Committee of the 904th Hospital of the Joint Logistics Support Force of the Chinese People's Liberation Army, and informed consent was obtained from patients (2023–10-9). Grouping Based on the Surgical Method Patients were divided into a one-stage repair reconstruction group (n = 81) and a staged repair reconstruction group (n = 66) on the basis of whether microsurgical techniques were applied for one-stage repair of open Achilles tendon defects. In the one-stage reconstruction group, 43 patients underwent vascular anastomosed fascia lata free anterolateral thigh perforator flap transplantation, and 38 patients underwent descending genicular artery free flap transplantation with the adductor magnus tendon.
In the staged reconstruction group, the first stage involved the use of a gastrocnemius neurocutaneous perforator flap to repair soft tissue defects in the heel area, followed by flexor hallucis longus tendon transfer in 31 patients and peroneus longus muscle tendon transfer with the lateral calcaneal artery in 35 patients in the second stage. Surgical Methods One-stage Surgical Repair Group Complete debridement was performed, the length of the Achilles tendon defect was marked and measured, and the proximal ends of the posterior tibial artery, vein, great saphenous vein, and saphenous nerve in the recipient area were dissected to prepare for transplantation. 1. Anterolateral thigh perforator free flap transplantation (Figs. , , , , and ) . Preoperative Doppler ultrasound was used to detect and mark the course of the lateral circumflex femoral artery perforator. The corresponding flap on the thigh was designed and harvested on the basis of the size and shape of the wound in the recipient area. The area under the fascia lata between the rectus femoris and vastus lateralis was dissected to identify the descending branch of the lateral circumflex femoral artery. Subsequently, 1 ~ 2 musculocutaneous perforators were carefully dissected and preserved, the fascial tissue around the blood vessels was retained, and the lateral femoral cutaneous nerve within the flap was preserved. According to the length of the Achilles tendon defect, a section of the fascia lata approximately 5 cm wide was harvested along the distal course of the flap fascia lata and stored until use. End-to-end anastomosis was performed between the posterior tibial artery and vein and the lateral femoral circumflex artery, the lateral femoral cutaneous nerve was anastomosed with the saphenous nerve, and the fascia lata was sutured with the Achilles tendon stump. A drainage tube was placed subcutaneously, excessive skin suturing was avoided, and the flap pedicle was not compressed. 2. Anastomosis of the descending genicular artery free flap transplantation with the adductor magnus tendon . The flap was designed according to the condition of the recipient area, Achilles tendon, and skin. Using the great saphenous vein as the longitudinal axis, a flap was designed below the level of the medial malleolus of the femur. A longitudinal skin incision was made approximately 10 cm above the adductor tubercle, and the sartorius and the medial vastus muscles were retracted to both sides to open the adductor canal and expose the femoral artery. The descending genicular artery was freed, the articular branch was carefully protected, the branch to the vastus medialis was ligated, and the adductor magnus tendon was freed. Simultaneously, the femoral medial bone flap, carrying the adductor tubercle, was cut to expose the great saphenous vein and free the saphenous artery and nerve. The saphenous vessels, nerves, and great saphenous veins could be seen entering the flap. The flap was cut according to the design line, and the descending genicular vessels or saphenous arteries were ligated at a high position. The donor site wound was covered with a full-thickness skin graft from the abdomen, and the proximal end of the adductor muscle tendon was anastomosed with the ruptured end of the Achilles tendon. 
Vascular reconstruction of the transplanted tissue flap was conducted as follows: When the descending genicular artery was type I or II, the descending genicular artery and its accompanying vein were anastomosed with the posterior tibial artery and its accompanying vein. When the descending genicular artery was type III or IV, the distal end of the descending genicular artery was anastomosed with the severed end of the posterior tibial artery, and the saphenous artery was anastomosed with the proximal end of the severed posterior tibial artery. The accompanying veins of the descending genicular artery and saphenous artery were anastomosed with the accompanying veins of the posterior tibial artery and the great saphenous vein, respectively. Nerve reconstruction involved anastomosis of the saphenous nerve and the sural nerve. Staged Surgical Repair Group In stage one, after thorough debridement, a gastrocnemius neurotrophic vascular pedicle flap was used to repair the soft tissue defect in the heel area. In stage two, a longitudinal incision of approximately 6 cm was made on the medial side of the Achilles tendon defect, with layer-by-layer dissection while preserving the peritendinous tissue to expose and neatly trim the Achilles tendon stumps. 1. Flexor hallucis longus tendon transfer A vertical incision was made from the talonavicular joint to the medial midsection of the first metatarsal, the abductor hallucis was retracted laterally, and the flexor hallucis longus tendon and flexor digitorum longus tendon at the Henry knot were isolated, with care taken to protect the plantar blood vessels and nerves. The flexor hallucis longus tendon and flexor digitorum longus tendon were sutured side-by-side at the distal fibrous crossover, and the flexor hallucis longus tendon was cut at the proximal fibrous crossover. Within the medial incision of the Achilles tendon, the deep posterior compartment of the calf was cut open to locate and extract the flexor hallucis longus tendon. A hole was drilled 2 cm downwards from the medial side of the apex at the posterior superior edge of the calcaneal tuberosity, and a 4.5 mm diameter drill bit was used to make an oblique hole from the distal medial end to the proximal lateral end of the calcaneus. The flexor hallucis longus tendon was drawn from the medial to the lateral side, the ankle joint was maintained in 30° plantar flexion, and the flexor tendon was sutured to the periosteum around the bone hole for fixation. The distal end of the flexor hallucis longus tendon was sutured to the severed end of the Achilles tendon, and the proximal end of the plantar tendon was cut to be woven and reinforced with the proximal end of the Achilles tendon. 2. Peroneus longus muscle tendon transfer to the lateral calcaneal artery. The operation was performed under epidural anaesthesia. After the lesion was completely removed, the range of the Achilles tendon and skin defects was measured. The vascular pedicle of the lateral calcaneal artery was carefully dissociated, the peroneal long tendon at the beginning of the cuboid head and tendon was disconnected, the vascularised tendon was transplanted into the defect area in a "U" shape, and then the tendon was sutured and fixed. Statistical analysis Statistical analysis was performed using SPSS 23.0 software. The measurement data are expressed as the means ± standard deviations, and the count data are expressed as percentages. 
For normally distributed data, a t test was used to analyse intergroup differences; for measurement data that did not conform to a normal distribution, the rank-sum test was used; for count data, the chi-square test was applied. Each variable was subjected to univariate binary logistic regression analysis. Indicators that were significant in the univariate binary logistic regression analysis were screened for multicollinearity, with a variance inflation factor < 10 taken to indicate no multicollinearity. After collinear indicators were excluded, the significant variables were entered into the multifactorial logistic regression analysis to calculate OR values and 95% CIs. Correlation analysis was conducted for the following variables: the length of the Achilles tendon defect, the area of the skin and soft tissue defects in the heel region, the Myerson classification, and the postoperative follow-up ankle joint function scores (ATRS and AOFAS) of the affected limb. Two-sided tests were used, and significant differences were defined as those for which P < 0.05.
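A minimal sketch of the analysis pipeline described above (group comparison, collinearity screening with the variance inflation factor, multivariable logistic regression with ORs and 95% CIs, and correlation with the functional scores) is given below using pandas, SciPy, and statsmodels. The data frame, outcome cut-off, and column names are hypothetical placeholders rather than the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy import stats

np.random.seed(0)

# Hypothetical placeholder data (n = 147); columns stand in for the study variables
df = pd.DataFrame({
    "good_outcome":  np.random.binomial(1, 0.5, 147),   # e.g. AOFAS above some cut-off
    "one_stage":     np.random.binomial(1, 0.55, 147),  # surgical method (1 = one-stage)
    "defect_len_cm": np.random.normal(5, 1.5, 147),     # Achilles tendon defect length
    "defect_area":   np.random.normal(30, 8, 147),      # heel soft tissue defect area
    "aofas":         np.random.normal(85, 8, 147),      # functional score
})

# Baseline comparisons: t test for continuous data, chi-square for counts
t, p_t = stats.ttest_ind(df.loc[df.one_stage == 1, "defect_len_cm"],
                         df.loc[df.one_stage == 0, "defect_len_cm"])
chi2, p_chi, _, _ = stats.chi2_contingency(pd.crosstab(df["one_stage"], df["good_outcome"]))

# Collinearity screening: variance inflation factor for each candidate predictor
X = sm.add_constant(df[["one_stage", "defect_len_cm", "defect_area"]])
vif = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns)}

# Multivariable logistic regression; exponentiated coefficients give ORs and 95% CIs
fit = sm.Logit(df["good_outcome"], X).fit(disp=0)
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5% CI", "97.5% CI"]

# Spearman correlation between defect length and the functional score
rho, p_rho = stats.spearmanr(df["defect_len_cm"], df["aofas"])

print(f"t test p = {p_t:.3f}, chi-square p = {p_chi:.3f}")
print("VIF:", {k: round(v, 2) for k, v in vif.items()})
print(or_table.round(3))
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")
```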
Comparison of baseline data between the two patient groups A total of 147 patients were included in this study, of whom 117 were male and 30 were female, with an average age of 43 (36, 56) years.
The causes of injury included 73 traffic accidents (52 wheel spoke injuries and 21 other traffic accidents), 36 crush injuries, and 38 mechanical injuries; all injuries were open. The area of the limb soft tissue defect was classified according to the Myerson classification: Type I in 34 patients, Type II in 77 patients, and Type III in 36 patients. Open Achilles tendon defect wounds with infections were observed in 42 patients, whereas 105 patients had simple defects. All 147 patients were followed up for a period of 12–36 (25.6 ± 1.8) months. All surgical incisions healed in the first stage without infection. Three months after surgery, colour Doppler ultrasound revealed good continuity of the Achilles tendon, with no complications, such as rerupture. Ten weeks after surgery, ankle dorsiflexion was 0°, and patients could walk in regular shoes without discomfort. At the final follow-up, the flap appearance was satisfactory, with good softness and elasticity and no scar contracture or significant pigmentation. The flap could withstand some degree of friction during walking and daily activities and had regained a protective sensation. All patients could perform single-leg heel raises and were satisfied with the surgical results. No complications, such as rerupture, delayed incision healing, incision nonhealing, infection, deep vein thrombosis, sensory reduction, Achilles tendon pain, Achilles tendon contracture, Achilles tendon laxity, or muscle hernia, were observed, and there were no abnormalities in the movement of the donor limb. The patients were divided into a one-stage reconstruction group (81 patients) and a staged reconstruction group (66 patients). Baseline data, such as age, sex, height, body weight, calculated body mass index, injury site (left or right), injury type (Myerson classification), length of Achilles tendon defect, area of skin and soft tissue defect, Achilles tendon thickness, surgery time, suture removal time, cause of injury, and presence of infection, were compared between the two groups (see Table for details). The test results revealed that, except for differences in the length of the Achilles tendon defect and the areas of the skin and soft tissue defects ( t L = 4.749, P L < 0.001; t S = 5.170, P S < 0.001), other baseline data did not significantly differ between the groups ( P > 0.05). Comparative Analysis of Long-term Ankle Joint Function Between the Two Groups of Patients A comparative analysis of the long-term postoperative follow-up ankle joint function scores of the two groups of patients (Table ) revealed that the postoperative follow-up ankle joint function scores of the one-stage reconstruction group were significantly better than those of the staged reconstruction group ( P AOFAS < 0.001; P ATRS < 0.001). Further comparative analysis of the two microsurgical methods within the one-stage reconstruction repair group (Table ) revealed no significant difference in the overall postoperative follow-up AOFAS score ( P AOFAS = 0.792), whereas the ATRS differed significantly ( P ATRS < 0.001). However, in terms of daily activities, walking ability on uneven surfaces, ability to ascend stairs quickly, abnormal gait, plantar flexion and dorsiflexion, inversion and eversion, the one-stage technique of free flap transplantation of the descending genicular artery with the adductor magnus tendon was superior to the vascular anastomosed fascia lata free anterolateral thigh perforator flap transplantation.
Additionally, the two surgical methods were compared in the staged surgery group. The results (Table ) of long-term follow-up of ankle joint function in patients revealed that the one-stage gastrocnemius neurocutaneous flap combined with a two-stage peroneus longus muscle tendon transfer with the lateral calcaneal artery was superior to the one-stage gastrocnemius neurocutaneous flap combined with a two-stage flexor hallucis longus tendon transfer ( P AOFAS < 0.001; P ATRS < 0.001). Correlation analysis of the Achilles tendon defect length, area of skin and soft tissue defects in the heel region, degree of wound infection, Meyerson classification, and postoperative follow-up ankle joint function score (ATRS score, AOFAS score) The correlations among the degree of preoperative wound infection, classification of heel area injury, size of the defect area in the heel region, length of the Achilles tendon defect, and a series of clinical characteristics with postoperative follow-up data related to ankle joint function were evaluated. The results (Figs. , ) revealed that the classification of heel area injury ( P < 0.001), the size of the defect area in the heel region ( P AOFAS < 0.001, R AOFAS = -0.397; P ATRS < 0.001, R ATRS = -0.436), and the length of the Achilles tendon defect ( P AOFAS < 0.001, R AOFAS = -0.429; P ATRS < 0.001, R ATRS = -0.280) were correlated with postoperative ankle joint function in the affected limb, whereas preoperative wound infection was not correlated with postoperative ankle joint function ( P AOFAS = 0.690, P ATRS = 0.759). Regression analysis results of factors affecting postoperative ankle joint function The results of univariate logistic regression analysis for each variable (Tables and ) revealed significant differences among the groups of variables, including the surgical method, the length of the Achilles tendon defect, and defect area of the heel area of the affected limb ( P < 0.05). Collinearity diagnostic analysis of the above indicators revealed no collinearity among the indicators (variance inflation factor < 10). Further multifactorial logistic regression analysis revealed (Tables and ) that the surgical method ( OR = 49.725, 95% CI : 16.996 ~ 145.478) and defect area of the heel region ( OR = 0.947, 95% CI : 0.903 ~ 0.992) were independent risk factors affecting patients’ long-term follow-up ankle joint function postoperatively. Discussion The skin and soft tissue of the heel area are thin and have poor mobility, making them prone to necrosis and defects after trauma, which can lead to the exposure of deep tissues, such as the Achilles tendon, bone, and joints. Improper handling can easily result in infection and osteomyelitis, requiring multiple surgeries and leading to a high disability rate. In Achilles tendon reconstruction surgery, consideration must be given to restoring the shape of the Achilles tendon while maintaining its original biomechanical structure, requiring the graft material to be wear resistant. The unique functional and anatomical characteristics of the heel region demand high standards for the appearance, stability, wear and pressure resistance, and sensory requirements of the repaired skin and soft tissue. The reconstruction of Achilles tendon defects combined with surrounding tissue loss presents a significant challenge. Currently, a wide variety of surgical options are available for repairing open Achilles tendon defects, both domestically and internationally. 
However, no unified criteria for indications and contraindications have been established. Traditional treatments often involve flap coverage of soft tissue defects in the heel area, followed by secondary Achilles tendon reconstruction. Although this approach reduces the difficulty of surgery, it results in an extended treatment cycle, with old Achilles tendon repairs being complex and more prone to complications, such as tendon adhesion, stiffness, severe retraction, and increased postoperative infection rates. Since Taylor et al. applied the anastomosed vascular iliac-inguinal composite flap for the repair and reconstruction of Achilles tendon and heel region soft tissue composite defects in 1979, setting a precedent for one-stage repair of composite tissue defects in the heel area with free composite tissue, one-stage repair has been increasingly applied in clinical practice. Some scholars have compared the postoperative pathological tissue of the Achilles tendon in patients who have undergone different surgical methods: in patients with one-stage Achilles tendon repair, postoperative inflammatory infiltration in the Achilles tendon tissue is not significant and anatomical landmarks are clear, whereas in patients undergoing staged surgery, fibrous scar proliferation at the Achilles tendon stump, reduced elasticity and toughness, and loss of moisture are observed. Although one-stage repair requires specific microsurgical techniques and has greater surgical complexity, it can reduce the occurrence of complications in secondary reconstruction. This study compared the long-term postoperative follow-up ankle joint function scores between the two groups of patients (Table ). The results revealed that the postoperative follow-up ankle joint function scores were significantly better in the one-stage repair group than in the staged reconstruction group ( P AOFAS < 0.001; P ATRS < 0.001). Additionally, logistic regression analysis was performed on various variables (Tables and ), and the results revealed that the surgical method ( OR = 49.725, 95% CI : 16.996 ~ 145.478) was an independent risk factor affecting the postoperative ankle joint function of patients. One-stage microsurgical repair allows for precise repair of damaged tissues, promoting early regeneration and repair of Achilles tendon cells. This approach effectively avoids the challenges of reconstructing and repairing neglected Achilles tendon defects, meeting the current requirements for ideal Achilles tendon repair and reconstruction surgery: the healing period should not be excessively long; otherwise, local adhesion and inflammation can affect Achilles tendon function. The repaired area should have appropriate local tensile strength soon after surgery, thus facilitating early functional exercise of the patient's ankle joint. Clinicians should strive to choose one-stage repair for open Achilles tendon defects to reduce the degree of psychological trauma and economic burden on patients, resulting in greater social benefits. Traditional free anterolateral femoral flap repairs are bloated in appearance, and patients often face problems such as low flexibility of the ankle joint, poor dorsiflexion and plantar flexion function of the affected foot, difficulty wearing shoes, and the need for a second surgery to thin the flap. 
In contrast, repair of Achilles tendon defects with a free descending genicular artery flap incorporating the adductor magnus tendon has the following advantages: a consistent donor-site anatomy, a sizeable external vessel diameter, a long vascular pedicle, minimal donor-site damage, and a thickness, shape, and tensile strength close to those of the native Achilles tendon. While repairing the Achilles tendon and skin defects, it is even possible to reconstruct the insertion point of the Achilles tendon. In addition, the reconstructed skin of the heel region is smooth and has a good appearance, and the flap is thin, which facilitates shoe wear and provides good abrasion resistance . In this study, the two microsurgical repair methods in the one-stage reconstruction group were further compared. The results (Table ) revealed no significant difference in the overall postoperative follow-up ankle joint function score ( P AOFAS = 0.792). However, in terms of daily life, the ability to walk on uneven surfaces, the ability to climb stairs quickly, abnormal gait, plantar flexion and dorsiflexion, and inversion and eversion, descending genicular artery free flap transplantation with the adductor magnus tendon was superior to vascular anastomosed fascia lata free anterolateral thigh perforator flap transplantation. This finding shows that the traditional anterolateral thigh free flap has advantages in repairing sizeable soft tissue defects in open Achilles tendon defects, whereas for patients with small soft tissue defects, descending genicular artery free flap transplantation with the adductor magnus tendon yields better postoperative ankle flexibility. It suggests that when one-stage microsurgical repair and reconstruction is chosen for an open Achilles tendon defect, the specific one-stage procedure should be selected according to the characteristics of the heel-region injury to optimise the prognosis of the affected limb. Achilles tendon reconstruction and repair can be categorised into two types: those with a blood supply and those without. Some scholars have conducted meta-analyses comparing these two approaches and have suggested that tendon grafts with a blood supply promote healing of the Achilles tendon ends, and that well-vascularised soft tissue covering the tendon improves its wear resistance and gliding ability . Other scholars have used New Zealand rabbits to repair Achilles tendon defects with vascularised peroneus longus muscle tendon grafts: the transplanted tendons showed minimal adhesion to the surrounding tissue and good sliding function, their tensile strength reached 67.7% of that of a normal Achilles tendon, and their stiffness was similar to that of a normal Achilles tendon, whereas the tensile strength of a reconstructed tendon without vascularisation was only 35.5% of that of a normal Achilles tendon, with stiffness far below normal. Achilles tendon repair with a vascular supply is therefore clearly advantageous and has become the consensus of most scholars. In this study, we compared and analysed the long-term ankle joint function of patients who underwent staged surgical repair via the two second-stage methods.
The results (Table ) revealed that the long-term ankle joint function of patients who underwent second-stage peroneus longus muscle tendon transfer with the lateral calcaneal artery was superior to that of patients in the second-stage nonvascularised flexor hallucis longus tendon transfer group ( P AOFAS < 0.001; P ATRS < 0.001). These results are consistent with the reports of the aforementioned scholars. In the repair of Achilles tendon defects, a reconstructed tendon with a rich blood supply is crucial for a good prognosis. A healthy blood supply promotes granulation tissue formation and the proliferation of Achilles tendon fibroblasts, accelerates both exogenous and endogenous repair processes, facilitates early healing of the Achilles tendon, reduces adhesions with surrounding tissues, and aids the early recovery of the tendon's movement and gliding functions. Selecting a tendon graft technique with an adequate blood supply is therefore important, and during surgery the paratenon (tendon sheath and microvessels) should be preserved to the greatest extent possible to protect the blood supply of the Achilles tendon. Basic research indicates that during the repair of Achilles tendon injuries, type III collagen, which has regenerative repair functions, predominates in the scar at the severed end. The scar tissue gradually transforms into tendon-like tissue, which can be seen as a transformation from newly formed type III collagen fibres to mature type I collagen fibres . The regenerative repair capacity of type III collagen fibres is closely related to the abundance of the surrounding skin and soft tissue; in other words, a smaller injury with fewer soft tissue defects indicates a stronger ability to produce new type III collagen fibres. Other scholars have indicated that in patients with large areas of heel region damage, the collagen fibres in the reconstructed area are sparse, disorganised, curved, and poorly aligned, whereas patients with smaller damaged areas have denser, more organised collagen fibres whose orientation aligns with the longitudinal direction of the tendon . This study revealed (Fig. ) a negative linear correlation between the area of the heel defect and the long-term postoperative ankle joint function score. Multivariate regression analysis (Tables and ) revealed that the defect area in the heel region is an independent risk factor for long-term postoperative ankle joint function ( P ATRS = 0.023, OR ATRS = 0.947, 95% CI: 0.903 ~ 0.992), indicating that the area of the heel region defect plays an important role in assessing the severity of open Achilles tendon defects and in predicting postoperative function. Therefore, during intraoperative repair of soft tissue defects around the heel, repairing the deep soft tissue bed and the peritendinous membrane of the Achilles tendon, and filling significant soft tissue defects around the heel with flaps as much as possible, is beneficial for Achilles tendon healing. Reasonable tension and periodic stress promote tendon healing . Some scholars believe that the length of the Achilles tendon defect directly correlates with the loss of type I collagen fibres, the repair time, and the tension load required to directly anastomose the two ends of the Achilles tendon.
If repaired through grafting or transfer surgery, the reconstructed Achilles tendon conduction chain often differs significantly from the biomechanical conduction of the original chain, making it more challenging to maintain appropriate cyclical tension and resulting in poorer healing . In this study, the length of the Achilles tendon defect was linearly and negatively correlated with the patient's postoperative ankle joint function score (Fig. ). In the multivariate regression analysis (Tables and ), the length of the Achilles tendon defect was shown to be an independent risk factor affecting the patient's postoperative ankle joint function ( P AOFAS = 0.013, OR AOFAS = 0.731, 95% CI: 0.570 ~ 0.937). Appropriate cyclical stress can coordinate various healing mechanisms at the microscopic level and improve the structural reconstruction of Achilles tendon tissue. Excessive Achilles tendon defects, whether end-to-end anastomosis or transfer grafting, lead to excessive tension load or cyclical stress disorders, which affect patient prognosis. Therefore, the length of the Achilles tendon defect plays an important role in the assessment of injury during Achilles tendon reconstruction and in postoperative functional prediction. During Achilles tendon repair surgery, efforts should be made to restore the original biomechanical conduction chain of the Achilles tendon. When the defect length is significant, tendon grafting or transfer reconstruction should be used to prevent excessive anastomotic tension and poor prognosis. A certain amount of tension should also be maintained during anastomosis to avoid laxity. Clinical classifications usually guide surgical treatment, and the Myerson classification divides Achilles tendon defects into the following: type I, with defects less than 2 cm in size; type II, with defects between 2 and 5 cm in size; and type III, with defects more than 5 cm in size. The analysis of the correlation of the Achilles tendon injury classification (Fig. ) in this study revealed that patients with higher-grade injuries had lower long-term postoperative ankle joint function scores than those with lower-grade injuries. Indeed, the length of the Achilles tendon defect and the defect area in the heel region are important criteria for assessing the severity of Achilles tendon defects according to the Myerson classification. Therefore, we believe that the Myerson classification is correlated with long-term postoperative ankle function in patients, likely because of differences in the length of the Achilles tendon defect and the defect area of the heel region. Patients with open Achilles tendon defects have thin skin around the heel region and poor blood supply. After suffering high-energy injuries, they are prone to ischaemia and necrosis, which subsequently lead to wound infection. If flap coverage treatment is not applied, the patient’s prognosis is usually poor. This study analysed the correlation between preoperative wound infection and long-term postoperative ankle joint function in the affected limb (Fig. ). The results revealed no correlation between the two scores (P AOFAS = 0.690, P ATRS = 0.759). Because the duration of preoperative infection in patients with open Achilles tendon defects does not exceed 4 weeks, its impact on the prognosis may be minimal. Of course, we cannot rule out systematic errors due to the small sample size. 
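The correlation and regression results reported above follow a standard screening workflow: correlate the continuous injury measures with the functional scores, screen predictors with univariate logistic regression, check collinearity with the variance inflation factor, and then fit a multivariate logistic model whose exponentiated coefficients are the odds ratios. The sketch below illustrates that workflow; it is not the authors' code. The file name, column names, the 0/1 coding of the surgical method, the use of Spearman correlation, and the binary "good_function" outcome are all assumptions, since this excerpt does not state how the AOFAS/ATRS scores were dichotomised for the logistic models.

```python
# Illustrative sketch of the reported analysis workflow (hypothetical data layout).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("achilles_cohort.csv")  # hypothetical per-patient data

def myerson_type(defect_cm: float) -> str:
    """Myerson classification as defined in the text: I < 2 cm, II 2-5 cm, III > 5 cm."""
    if defect_cm < 2:
        return "I"
    return "II" if defect_cm <= 5 else "III"

df["myerson"] = df["defect_length_cm"].apply(myerson_type)

# 1) Correlation of the continuous injury measures with the functional scores.
for predictor in ("defect_length_cm", "defect_area_cm2"):
    for score in ("aofas", "atrs"):
        rho, p = spearmanr(df[predictor], df[score])
        print(f"{predictor} vs {score}: rho = {rho:.3f}, p = {p:.4f}")

# 2) Univariate logistic regression screening; retain predictors with P < 0.05.
candidates = ["surgical_method", "defect_length_cm", "defect_area_cm2", "infection"]
selected = []
for var in candidates:
    X = sm.add_constant(df[[var]].astype(float))
    fit = sm.Logit(df["good_function"], X).fit(disp=False)
    if fit.pvalues[var] < 0.05:
        selected.append(var)

# 3) Collinearity check on the retained predictors (reported criterion: VIF < 10).
X_sel = sm.add_constant(df[selected].astype(float))
for idx, var in enumerate(selected, start=1):
    print(var, "VIF =", round(variance_inflation_factor(X_sel.values, idx), 2))

# 4) Multivariate logistic regression; exponentiated coefficients are odds ratios.
multi = sm.Logit(df["good_function"], X_sel).fit(disp=False)
print("OR:\n", np.exp(multi.params))
print("95% CI:\n", np.exp(multi.conf_int()))

# Worked reading of the reported OR for heel defect area (0.947, 95% CI 0.903-0.992),
# assuming it is per one-unit increase in area: a defect 10 units larger multiplies
# the odds of a good functional outcome by about 0.947 ** 10 ≈ 0.58.
```

Under the stated assumption that the reported odds ratio of 0.947 is per unit of defect area, a defect larger by 10 units corresponds to the odds of a good outcome being multiplied by roughly 0.58, which is one concrete way to read the negative association between defect size and functional outcome.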
Currently, the comparability of various studies on the repair of open Achilles tendon defects, both domestically and internationally, is relatively low. The evaluation indicators are primarily subjective and lack objective evaluation criteria. Among the few literature reports with objective evaluation standards, most are single-centre retrospective studies with small sample sizes, which limits the role of statistical analysis.
Conclusion

Compared with a two-stage approach, microsurgical one-stage reconstruction for open Achilles tendon defects is more conducive to tendon healing, resulting in a better long-term prognosis of ankle joint function. This approach reduces psychological trauma and economic pressure on patients, thus yielding greater social benefits. The use of vascularised tendon tissue to repair Achilles tendon defects is a good choice that meets the needs of anatomically and physiologically functional reconstruction of the Achilles tendon. For open Achilles tendon defects, tendon transplantation surgery with a blood supply is preferred. During surgery, efforts should be made to restore the original biomechanical transmission chain of the Achilles tendon, maintain a certain level of periodic tension, repair the deep soft tissue bed and peritendon membrane of the Achilles tendon, and fill significant defects in the surrounding soft tissue with skin flaps as much as possible.

Supplementary file 1.
Supplementary file 2.
Morphological and histochemical identification of telocytes in adult yak epididymis
d060c115-79cb-44d3-9a10-b618f4fb9f4b
10066225
Anatomy[mh]
Telocytes (TCs) are a newly discovered type of mesenchymal cell with unique morphological characteristics, possessing long cytoplasmic extensions called telopodes (TPs). TPs are slender, long, and variable in number. The cell bodies are mainly irregular ellipsoid, pear, or spindle shaped; they are rich in mitochondria and endoplasmic reticulum and have a secretory function – . These unique morphological characteristics distinguish TCs from other mesenchymal cells. Distributed through the organ stroma, the TPs are in close contact with blood vessels, nerve bundles, and local immune cells, forming a network between tissues; this network is considered to be the structural basis for cell communication , . Furthermore, TCs may also establish unique spatial relationships with a variety of cells, including adjacent parenchymal cells and other cells in the interstitial compartment, and are considered to regulate the dynamic balance of the local microenvironment through direct cell contact or the release of secretory vesicles – . These secretory vesicles are considered "messengers" of substance exchange and signal transduction between TCs and other cells , , and are also an important reason why TCs are thought to be secretory cells. As early as a century ago, Cajal discovered a special cell group in the muscular layer of the human intestinal tract and named them "interstitial neurons" . Telocytes were named by Popescu and Faussone-Pellegrini at the beginning of this century . For many years, research on telocytes was based on imaging their ultrastructure under a transmission electron microscope, so transmission electron microscopy is considered the "gold standard" for research into TC morphology. TCs have been found in the human myocardium and gallbladder, where CD34, c-kit (CD117), and vimentin are co-expressed in TCs and are involved in stem cell differentiation, the coordination of new angiogenesis, and the regulation of paracrine function in the interstitial tissue , . Furthermore, TCs are abundant in the mouse lung and rat kidney , , and have also been found in the fish brain and in poultry skin . In recent years, TCs have been found to be involved in many functions, including cell regeneration , inhibition of apoptosis , inflammation repair , cell communication , angiogenesis , , and stem cell function . In previous studies, CD34, CD117, and vimentin were widely considered effective markers of TCs – . These proteins are also widely used to locate and screen for TCs. CD34 is a marker receptor found on the surface of mesenchymal stem cells; it is also a highly glycosylated type I transmembrane glycoprotein expressed on the surface of hematopoietic stem/progenitor cells of humans and other mammals – . CD117 (c-kit) is a marker of stem/progenitor cells in the heart and is considered one of the effective markers of TCs . Vimentin, a conserved type III intermediate filament protein, is often found in fibroblasts, vascular endothelial cells, neutrophils, and macrophages, and is abundantly expressed in glomeruli, tubules, and renal interstitial cells . Vimentin is a marker of the mesenchymal phenotype and an important cytoskeletal protein; it is generally expressed only in mesenchymal cells and is closely related to the growth, invasion, and metastasis of tumor cells . For a long time, studies of TCs have been based on their stem cell-like characteristics, with stem cell surface markers such as CD34 and CD117 generally considered the marker proteins of TCs.
Vimentin is a phenotypic marker of interstitial tissue and is often used to locate TCs in the interstitium. The yak is known as the 'boat of the plateau' and an 'all-purpose' livestock animal in plateau pastoral areas , . The internal environment and functions of the epididymis are affected by the high-altitude hypoxic environment, and the epididymis is an important site for sperm maturation, processing, and storage , . For example, changes in acrosomal enzyme activity and in the acrosome reaction under hypoxic conditions impair sperm fertilising ability – . Therefore, our study aimed to determine the distribution of TCs in the yak epididymis and to analyse their ultrastructure, providing new clues to the function of TCs in plateau animals.

Ultrastructural characteristics of yak epididymis TCs under TEM

TEM is the most effective method to identify TCs. TEM observation showed that the TCs in the epididymis of yak contained a large nucleus of indefinite shape; the most typical forms were ellipsoid, serrated, and pear-shaped. The nucleus, with obvious chromatin, was surrounded by a small amount of cytoplasm rich in secretory vesicles and mitochondria, and there were a large number of TPs composed of long cytoplasmic fragments (Fig. A,B). Most of the TCs in the epididymis of yak were distributed around the blood vessels and could be clearly observed to contact the blood vessels through TPs, possibly forming special cell connections (Fig. C,D,E, and H). In addition, some of the TCs distributed in the interstitium of the epididymis were in contact with peritubular myoid cells and fibroblasts (Fig. F,G). The morphology of TCs in the corpus epididymis was similar to that in the caput epididymis, characterised by a large body, thick cytoplasm, a full nucleus, a clear nucleolus, and abundant secretory vesicles distributed around the nucleus (Fig. A–F). The nucleoli of TCs were clearly visible at high magnification, and the cells were also rich in extranuclear secretory vesicles and rough endoplasmic reticulum (Fig. G). Furthermore, there was a slight difference in morphology between the TCs distributed near the basal membrane and those in the stroma of the cauda (Fig. A–G). TCs in the stroma had plumper cell bodies, longer TPs, and numerous secretory vesicles, whereas the TCs outside the basement membrane had smaller bodies, larger nuclei, irregular strip shapes, and shorter TPs; they were also closely connected with many epithelial cells (Fig. A, B, and F). TPs, as a signal communication tool, extend to a variety of cells, forming a special network structure (Fig. B and F).

Morphological structure of yak epididymis TCs under SEM

The SEM photographs were colored using Adobe Photoshop 2020 software. TCs showed complete cell morphology, with obvious cytoplasmic processes on the TPs. TCs were usually attached to the epithelium or connected to other cells through TPs, but some existed alone in the epididymal stroma. Moreover, extensive interactions form a complex TP network between adjacent epithelial cells, and cell secretions were attached to the TPs (Fig. A–I).

Morphological model of telocytes

The morphological structure and a morphological model diagram of TCs in the yak epididymis were proposed based on the results of TEM and SEM (Figs. and ).

TCs special staining results

Toluidine blue staining showed that a few TCs were distributed in the interstitium and more were distributed near the microvasculature of the epididymis.
However, these TCs had varied structures, adopting ellipsoid, spindle, pear, or other forms (Fig. A–C). Mercury–bromophenol blue staining made the TPs more obvious, and the cytoplasmic processes on the TPs were stained dark blue (Fig. D–F).

Immunohistochemical and immunofluorescence analysis of TC surface markers

Immunohistochemistry revealed thick epithelial cells, mesenchymal cells, and capillaries in the epididymis of yak. CD34 was strongly expressed, mainly in the interstitium and near the epithelium (Fig. A–C). Compared with the caput and cauda, the CD34-positive intensity in the epididymis corpus was higher (Fig. B). Vimentin staining was strongly positive in the caput, corpus, and cauda of the yak epididymis; notably, vimentin showed intensely positive expression in the stroma, epithelium, and microvascular wall of the epididymis (Fig. D–F). CD117 showed strong positive expression in the caput, corpus, and cauda of the yak epididymis, with stronger specificity than CD34 and vimentin; CD117-immunopositive cells were mainly distributed in the epithelial cytoplasm and in mesenchymal cells (Fig. G–I). The immunofluorescence results showed that vimentin was widely and strongly expressed in the epididymis of yak, including in fibroblasts, perivascular muscle-like cells, and vascular endothelial cells (Fig. ). These cells are present in the loose connective tissue around the tubulointerstitium. CD34 was positively distributed in the stroma and epithelium (Fig. ). The CD34-positive oval cells near the epithelium were short, with cytoplasmic processes, and may be dendritic cells or TCs; their positive expression was stronger in the caput. Co-expression of vimentin/CD34 was observed in the interstitium, capillaries, and epithelium of the yak epididymis (Fig. ). There was strong positive expression of CD117 in the stroma and epithelium, with a large number of CD117-positive interstitial cells around the blood vessels and relatively few in the epithelium (Fig. ). Cells with strongly positive CD34 expression and weakly positive CD117 expression appeared in the yak epididymis corpus; these may be phagocytes or lymphocytes (Fig. ). CD117/CD34 co-expression was observed in the epididymal interstitium, as shown by yellow fluorescence (Fig. ). The co-expression of vimentin/CD34 and CD117/CD34 confirmed common features: the cells had long cytoplasmic extensions and large nuclei, and most were distributed around the interstitial capillaries of the yak epididymis and outside the epididymal epithelium. The observed morphology conformed to the basic characteristics expected for TCs.

The mRNA and protein expression of effective markers of telocytes in yak epididymis

The expression of CD34, vimentin, and CD117 mRNA in the yak epididymis was detected by qRT-PCR. CD34 mRNA expression in caput epididymis tissue was significantly higher than that in the corpus and cauda ( p < 0.01). Although vimentin mRNA expression did not differ significantly between the caput and corpus, it was significantly higher in both than in the cauda epididymis ( p < 0.01). Similarly, CD117 mRNA expression in the yak caput epididymis was significantly higher than that in the corpus and cauda ( p < 0.01) (Fig. ).
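Regional mRNA comparisons of this kind are usually based on relative quantification of qRT-PCR data; a common choice is the 2^-ΔΔCt method, although this excerpt does not state which quantification the authors used. The sketch below is an illustration only: the gene names come from the text, but the reference gene (here GAPDH), the calibrator region, and the Ct values are invented placeholders.

```python
# Minimal 2^-ΔΔCt sketch for relative mRNA expression (illustration only; the
# reference gene, calibrator, and Ct values below are invented placeholders).
# Mean Ct values per epididymal region for a target gene and a reference gene.
ct = {
    "caput":  {"CD34": 24.1, "GAPDH": 18.0},
    "corpus": {"CD34": 26.3, "GAPDH": 18.2},
    "cauda":  {"CD34": 26.9, "GAPDH": 18.1},
}

calibrator = "cauda"  # expression is reported relative to this region

def relative_expression(region: str, target: str = "CD34", ref: str = "GAPDH") -> float:
    """2^-ΔΔCt: normalise the target to the reference gene, then to the calibrator region."""
    d_ct = ct[region][target] - ct[region][ref]
    d_ct_cal = ct[calibrator][target] - ct[calibrator][ref]
    return float(2.0 ** -(d_ct - d_ct_cal))

for region in ct:
    print(region, round(relative_expression(region), 2))
# With these placeholder values the caput shows the highest relative CD34 expression,
# which mirrors the direction of the reported result (caput > corpus and cauda).
```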
Western blotting results showed that the protein expression levels of CD34, vimentin, and CD117 in the caput epididymis of yak were significantly higher than those in the corpus and cauda (Fig. ); moreover, the protein expression patterns were consistent with the mRNA expression. In summary, the relatively high gene transcription and protein translation levels of TC surface markers in the caput epididymis of yaks are probably closely related to its physiological function.

Homeostasis of the epididymal microenvironment guarantees the reproductive ability of male animals. As a type of interstitial cell, TCs play an important role in the homeostasis of the epididymal immune microenvironment and in blood–epididymis barrier function.
TCs have been found in many animal tissues, such as the human colon , fish brain , and poultry skin, and they share common features: slender cytoplasmic extensions and a large nucleus. In addition, TCs have also been found in the testes of rats , rabbits , and camels . The TCs found in our study share these morphological characteristics with previous reports. A large nucleus is one of the hallmarks of TCs; serrated nuclei have so far been described only in camel testes, and the same serrated nuclei were present in the TCs of the yak cauda epididymis. TEM has been considered the most effective way to differentiate TCs from other mesenchymal and epithelial cells . In this study, the TEM ultrastructure of TCs at different locations in the yak epididymis was characterized. TCs distributed around the capillaries differed in morphology from those near the basement membrane: the former had fuller cell bodies and relatively long TPs, whereas the latter showed the opposite features. Some scholars have confirmed that TCs around the capillaries may be associated with angiogenesis and material exchange . TCs near the epididymal basement membrane were closely connected with the peritubular myoid cells, which provide structural support for the epididymal duct, and may participate in the contraction of the epididymal duct, providing an impetus for sperm transport. TEM analysis of the ultrastructure revealed numerous secretory vesicles in the TPs of yak epididymal TCs; these were released into the surroundings of the TCs by exocytosis and were easily observed by TEM. Secretory vesicles have also been reported in TCs of the human testis, myocardium, ovary, and other tissues. Considered the “messengers” by which TCs communicate with their surroundings , this feature is regarded as one of the most important ultrastructural criteria for distinguishing TCs from other interstitial cells. By SEM, we found that TCs in the yak caput epididymis were mainly distributed in the interstitium and around the blood vessels. TPs of different sizes extending from TC cell bodies, as well as the network structure composed of dense TPs, could be observed by SEM, which is also one of the important characteristics of TCs. In this study, we explored multiple special staining methods and found that mercury–bromophenol blue stained the TPs and their cytoplasmic processes dark blue. The cytoplasmic processes of TCs can secrete proteins, and TCs in the yak epididymis during the estrous season may be involved in the reproductive regulation of the epididymis through the secretion of related proteins by the cytoplasmic processes of TPs; this suggests the need for further study of the seasonal variation of TCs. Studies have shown that immunohistochemistry can be used as a basic method to localize TCs . Because vimentin was strongly expressed in both the stroma and the epithelium of the epididymis, immunohistochemistry and double immunofluorescence co-localization were used to determine the location of TCs. We found that cells with double-positive expression of vimentin/CD34 and CD117/CD34 matched the TC phenotype in the stroma and capillaries of the yak epididymis. These cells had long cytoplasmic extensions and oval nuclei, consistent with the basic characteristics of TCs. Similar results have been reported in the testes of humans , doves , and Pelodiscus sinensis .
We found that the mRNA and protein expression of CD34, vimentin, and CD117 was relatively high in the yak caput epididymis. The caput epididymis plays an important role in sperm maturation and processing, especially in sperm concentration, maturation, and transport , . Compared with the corpus and cauda, the caput acts more like a sperm “processing workshop”, suggesting that the high expression of TC markers in the caput epididymis may be related to sperm processing. Using double immunofluorescence staining, we also found that TCs were associated with dendritic cells, peritubular myoid cells, and lymphocytes through TPs. We believe that there is a specific network structure between TCs and the epithelium and stroma in the yak epididymis, namely the TC network. Following a previous report that the network structure of TCs in the human testis may be related to the blood–testis barrier , the TC network of the yak epididymis may play an important role in the formation of the yak blood–testis barrier. Studies have found that TCs present in the camel epididymis are positive for vascular endothelial growth factor , indicating that TCs are probably involved in angiogenesis in the camel epididymis. TCs promote angiogenesis by secreting extracellular vesicles containing microRNAs , . Angiogenesis is the series of physiological events leading to the formation of new blood vessels, in which monolayer endothelial cells (ECs) control permeability through material exchange and maintain homeostasis of the vascular environment . Microvessels in the epididymis are considered a prerequisite for sperm maturation and transport . Interestingly, we found that the TPs of TCs were always extremely close to blood vessels and extended toward the vascular wall. Combined with the TEM evidence, we speculate that TCs may act as a material exchange pump that plays an important role in the exchange of material and energy between the interstitium and the nutrient vessels of the epididymis. At present, more functions of TCs are emerging , , such as injury repair , , vascular regeneration , and cell communication . However, there are few studies of TCs in animal reproduction. Studies have found that the secretory vesicles of TCs in camel testis are affected by seasonal changes, with more secretion in spring and less in summer , suggesting some relationship between TC secretory vesicles and camel estrus; however, there is insufficient evidence to prove that TCs are involved in the regulation of animal estrus. Some researchers believe that TCs may indirectly affect the secretion and release of androgens by establishing intercellular connections with other interstitial cells, thereby regulating the reproductive activities of male animals . Furthermore, studies have reported that TCs express progesterone and estrogen receptors in the female gonadal axis – . The emergence of such studies is steadily drawing TCs into the field of reproductive research. Our study revealed the morphological structure and distribution of TCs in yak reproductive organs and provides a reference for the study of TCs in animal reproduction in a hypoxic plateau environment.

Animals and sample acquisition

The epididymal tissue of adult healthy yaks (n = 10; ≥ 3 years) was collected from designated slaughterhouses from July to August in Xining City, Qinghai Province (average altitude 3100 m), China.
Based on their anatomical characteristics, epididymis samples were divided into three parts: caput, corpus, and cauda. A portion of each sample was quickly frozen in liquid nitrogen, transported to the laboratory, and then stored at −80 °C for RNA and protein extraction; the remaining samples were stored in 4% paraformaldehyde and 2.5% glutaraldehyde, respectively, for histological and ultrastructural study. All experimental animals were approved by the Animal Care and Use Committee of the Veterinary College of Gansu Agricultural University (Ratification number: GSAU-Eth-VMC-2021-010), and all methods were performed in accordance with the relevant guidelines and regulations.

Drugs and reagents

All experimental antibodies were purchased from commercial suppliers. Rabbit polyclonal antibodies against CD34 (bs-8996R), vimentin (bs-8533R), and CD117 (bs-1005R) were purchased from Beijing BIOSS Antibodies Co., Ltd, China. Goat anti-rabbit IgG H&L (ab150077, Alexa Fluor® 488; ab150079, Alexa Fluor® 647; ab150080, Alexa Fluor® 594) were provided by Abcam, Cambridge, UK. The DAB color reagent kit (PA110) was provided by Beijing TIANGEN Biotechnology Co., Ltd. The immunohistochemical staining kit (SP-0023) used was produced by ZYMED USA, Beijing BIOSS Biotechnology Co., Ltd. ECL Plus ultrasensitive luminescent solution (PE0010) was purchased from Solebao Biotechnology Co., Ltd.

Sample preparation and observation

Preparation of ordinary samples: Epididymal tissue samples (0.5 × 0.5 × 0.5 cm) were fixed with 4% paraformaldehyde solution and rinsed in running water for 24 h before gradient ethanol dehydration.
Subsequently, 50 µL of rabbit polyclonal antibody (CD34, vimentin, or CD117) diluted 1:300 was added to each slide; the negative control consisted of 0.01 mol/L PBS instead of the primary antibody. The slides were incubated at 37 °C for 4 h and washed three times with phosphate-buffered saline (PBS) (each wash 5 min); then 50 µL of biotin-labeled goat anti-rabbit IgG working solution was added, and the sections were incubated at 37 °C for 15 min and washed three times with PBS (each wash 5 min). Horseradish peroxidase-labeled streptavidin solution was then added, followed by three PBS washes (each wash 5 min). The DAB color developing solution was applied for 5–20 min. Hematoxylin counterstaining was performed for 5 min; then, sections were dehydrated through an alcohol gradient, made transparent with xylene, and sealed with neutral gum. The sections were observed under a microscope. Immunofluorescence staining was performed with the primary antibody; sections were incubated at 37 °C for 4 h and rinsed with PBS three times. Subsequent steps were completed in a dark room. Anti-rabbit IgG H&L AF488 or AF594 (dilution 1:1000) was added, incubated at 37 °C for 1 h, and washed five times with PBS (each wash 5 min). Then, the second antibody was added and the sections were incubated at 37 °C for 4 h and washed five times in PBS (each wash 5 min). Rabbit anti-PHD2/AF647 (dilution 1:1000) was added dropwise and incubated at 37 °C for 1.5 h. After washing with PBS, DAPI was added dropwise and incubated in a dark room for 10 min. After further washing with PBS, the slides were sealed with mounting medium and the sections were observed under a laser confocal microscope. The negative control consisted of 0.01 mol/L PBS instead of the primary antibody; the remaining conditions and steps were the same.

qRT-PCR analysis

The caput, corpus, and cauda tissues of yak epididymis stored at −80 °C were removed from storage, and 0.1 g of each was weighed and placed into a mortar. The tissue was ground under liquid nitrogen, 1 mL of Transzol was added, and the sample was vortexed. Then, 0.2 mL of chloroform was used to extract RNA, and RNA purity was confirmed. cDNA was synthesized by reverse transcription and stored at −80 °C until further use. Primer Premier 5.0 software (Primer Biosoft International, Palo Alto, USA) was used to design primers; the primer sequences were obtained with reference to the NCBI database ( www.ncbi.nlm.nih.gov ), and the primer information is shown in Table . The β-actin gene was used as the internal reference. qRT-PCR was performed using a Light Cycler 480 thermocycler (Roche, Mannheim, Germany) in a final reaction volume of 20 μL, comprising 1 μL of cDNA, 1 μL of forward primer, 1 μL of reverse primer, 10 μL of 2 × SYBR Green II PCR mix (TaKaRa, Dalian, China), 0.4 μL of ROX reference dye, and 6.6 μL of nuclease-free H2O. The cycling conditions were 95 °C for 30 s, followed by 45 cycles of 95 °C for 5 s and 60 °C for 30 s. Three replicates were performed for each sample to ensure the accuracy of the relative expression of the target genes.

Western blotting analysis

From the yak epididymal tissue stored at −80 °C, 0.1 g was collected and placed into a mortar. Liquid nitrogen was added and the tissue was ground into a fine powder with a pestle. Then, protein lysis buffer was added and the samples were lysed on ice for 3 h after vortexing. Tissue and cell lysates were centrifuged at 12,000 rpm at 4 °C for 15 min and the supernatant was stored at −80 °C.
Protein concentration was determined with a BCA protein assay kit (PC0020, Solarbio Biotechnology Co., Ltd., Beijing, China), and all samples were diluted to the same protein concentration. Protein samples (25 μg) were separated by SDS–polyacrylamide gel electrophoresis (SDS-PAGE) using a 5% stacking gel and a 12% separating gel. After electrophoresis, the separating gel was cut according to the size of the target protein with reference to the molecular weight marker, and the excised target bands were wet-transferred to the support membrane. The membrane was incubated with primary antibodies (1:800) at 4 °C overnight and washed with Tris-buffered saline + Tween 20 (TBST). Horseradish peroxidase-labeled goat anti-rabbit IgG was used as the secondary antibody; incubation was performed for 2 h at 37 °C, followed by washing in TBST for 10 min. The polyvinylidene fluoride membrane was then subjected to chemiluminescence detection. Chemiluminescent substrate solutions A and B were mixed at a ratio of 1:1, and the reaction proceeded at 25 °C. The transfer membrane was photographed for analysis, with β-actin used as the internal reference.

Statistical analysis

Western blotting data were quantified with ImageJ software (National Institutes of Health, Maryland, USA). The qRT-PCR data were analyzed by the 2^−ΔΔCT method; the obtained results were subjected to significance testing in SPSS 17.0 statistical software, and histograms were plotted with GraphPad 9.0 software.

Institutional review board statement

The study is reported in accordance with the ARRIVE guidelines. All experimental animals were approved by the Animal Care and Use Committee of the Veterinary College of Gansu Agricultural University (Ratification number: GSAU-Eth-VMC-2020-016w).
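To make the 2^−ΔΔCT calculation mentioned in the statistical analysis above concrete, the following minimal Python sketch shows how relative marker expression could be derived from raw Ct values; the Ct numbers, gene choice, and calibrator region are hypothetical placeholders and are not data from this study.

import numpy as np

# Hypothetical Ct values (three technical replicates each); not data from this study.
ct = {
    "caput":  {"CD34": [24.1, 24.3, 24.2], "beta_actin": [18.0, 18.1, 17.9]},
    "corpus": {"CD34": [26.0, 25.8, 26.1], "beta_actin": [18.2, 18.0, 18.1]},
    "cauda":  {"CD34": [26.5, 26.7, 26.6], "beta_actin": [18.1, 18.2, 18.0]},
}

def delta_ct(region):
    # Normalize the target gene to the beta-actin internal reference.
    return np.mean(ct[region]["CD34"]) - np.mean(ct[region]["beta_actin"])

calibrator = delta_ct("cauda")  # cauda chosen arbitrarily as the calibrator
for region in ("caput", "corpus", "cauda"):
    ddct = delta_ct(region) - calibrator
    print(f"{region}: relative CD34 expression (2^-ddCt) = {2 ** -ddct:.2f}")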
Supplementary Information 1.
Supplementary Information 2.
Medical malpractice in Oman: A 12-year retrospective record review
Medical malpractice is defined as a deviation from accepted standards of good medical practice, taking the form of acts or omissions that result in injury, harm, or death . Medical malpractice is handled differently in different countries. In the United States, malpractice cases are generally civil offenses , while in Japan and China, they are criminal offenses . In Italy, most claims are pursued through the civil court system rather than through criminal law . To date, there is a dearth of reports and analyses of malpractice claims or lawsuits in Arabian Gulf countries, where the recent economic growth of the fossil fuel and transshipment industries has resulted in rapidly improved standards of living and exponential growth of the health industry. A prevalent issue in these countries is a labor shortage in the healthcare sector, resulting in a dependency on an expatriate workforce . One such Arabian Gulf country is Oman, with an estimated population of 5 million, of which approximately half are expatriate workers and their families . About 58% of the health workforce is Omani, and the remaining expatriate healthcare workers come from different parts of the world, bringing with them their own cultural norms regarding healthcare safety standards and practices . In Oman, the Ministry of Health (MOH) is the principal provider of health care and is responsible for the supervision and coordination of government hospitals, public health centers, and private institutes . Oman has a universal free healthcare system, and most health services are run by the government (i.e., the MOH). Oman aspires to fulfil the Sustainable Development Goals of the United Nations, including Goal 3, which aims to ensure healthy lives and promote well-being for all at all ages . The MOH established the Directorate General Quality Assurance Centre (DGQAC) to lead patient safety guidelines, policies, and audit activities in Oman’s healthcare institutes, and the DGQAC was designated as a collaborating center by the World Health Organization (WHO) for quality and patient safety training. The MOH also adopted the WHO’s ‘Patient Safety Friendly Hospital Initiative’ (PSFHI) to reduce rates of preventable adverse outcomes in hospital settings . Some healthcare systems in Oman are accredited by international organizations that emphasize accountability in medical practice . All healthcare workers are expected to adhere to established standards of care, clinical guidelines, and protocols . A few basic principles associated with patient safety and positive treatment outcomes include obtaining informed consent from patients, effective provider-to-patient communication, and maintaining accurate and comprehensive medical records. A study assessing healthcare professionals’ perceptions of patient safety culture in hospitals in Oman found that the safety culture dimensions with the lowest positive scores included ‘non-punitive response to error’, ‘staffing’, and ‘hand-offs and transitions’ . Another study by Al Balushi et al reported that significant factors associated with higher self-reported medical error rates among healthcare professionals included male gender, Omani nationality, younger age, occupational burnout, and exposure to work-related bullying . In the event of a malpractice claim or lawsuit, the Oman Medical Association provides its members with access to a professional law firm to help manage their defense in the judicial system.
In addition, all government healthcare workers are covered by the national compensation fund for any financial liability that may arise from such medical malpractice claims. Similarly, the MOH mandates that all private healthcare institutions carry liability insurance to protect their healthcare workers against malpractice litigation. There is a paucity of studies documenting medical malpractice litigation in the Arabian Gulf countries, and Oman is no exception. Over time, Omani law has established policies related to the regulation of health services and, when their integrity is compromised, the process of handling malpractice. According to the MOH, the medical liabilities that healthcare professionals may face include (i) penal, (ii) civil, (iii) disciplinary, and (iv) administrative liability. With such a broad scope, liability is regulated by several laws and regulations that have evolved to stay abreast of best practices . According to Al-Azri, the Basic Statute of Oman, promulgated by Royal Decree 101/96, largely codified the legal system in Oman . In 2019, Oman adopted Royal Decree 75/2019, a new law that guides the practice and professional ethics of medicine and related health professions, bringing Omani healthcare legislation in line with international standards . To improve the quality of medical practice and reduce the incidence of medical malpractice litigation, it is important to understand the patterns and characteristics of existing malpractice claims, which can provide further insight into litigious errors in clinical practice. The challenges in establishing these patterns include the paucity of reliable data on litigation processes, and no studies have conducted in-depth explorations of medical litigation in Oman. To fill this gap in the literature, lay the foundations for mechanisms to prevent and mitigate medical errors, and design evidence-based strategies to reduce litigation, the present study aimed to conduct a 12-year retrospective review investigating the characteristics of malpractice claims, the outcomes decided by the Higher Medical Committee (HMC), and predictors of medical errors in cases registered with the HMC in Oman.

2.1 Setting

During the period between 2010 and 2021, 1284 cases raised by Omani nationals and non-Omani expatriates were registered for investigation by the Higher Medical Committee (HMC), a committee established to assess and provide technical opinions on medical errors in cases that are submitted to the judicial system, including the Public Prosecution or the courts, as well as the MOH. The HMC plays a crucial role in evaluating whether medical errors have occurred and in offering expert insight in healthcare-related legal proceedings. The medical specialties, the healthcare sectors, and the geographical locations involved in the claims were examined. Data were also collected on the institution referring the case for investigation: the courts, the Public Prosecution, or the MOH. Oman has 11 administrative provinces/governorates (muhafazah): Muscat, Dhofar, Musandam, Buraymi, Dakhiliyah, North Batinah, South Batinah, South Sharqiyah, North Sharqiyah, Dhahirah, and Wusta. For theoretical reasons, this study distinguished between those living in Muscat, the capital of Oman, and those living in the other regions.
The urban-rural dichotomy may influence the incidence of medical lawsuits, as the quality of medical care can be affected by systemic issues such as access to resources, the adequacy of staffing, and standards of training and equipment. These factors can contribute to the risk of medical errors and increase the likelihood of medical lawsuits. Thus, places of residence were conveniently classified as ‘Muscat region’ or ‘other regions’.

2.2 Data collection and pathways to HMC

The pathways to the HMC are depicted graphically in . In Oman, the handling of medical litigation has evolved, and there are currently three major committees that handle these cases. First, the Regional Medical Technical Committees (RMTC) consist of senior medical professionals representing various specialties, who are allowed to seek the assistance of other senior medical professionals as Technical Experts (TE) in their investigations. The RMTC investigates medical disputes or complaints raised in its region involving government or private health institutes. Second, the Central Technical Committee (CTC) is similar to the RMTC in that it is also made up of a group of senior physicians; it is located at the MOH headquarters, where it reviews all cases investigated by the RMTC. Following the review, the CTC advises whether the case should be closed or whether it would benefit from a referral to the HMC for further investigation. Third, cases suspected of involving medical errors, regardless of the outcome of the RMTC investigation, are referred to the Higher Medical Committee (HMC) for further investigation. The HMC is made up of a group of senior physicians in multiple specialties (permanent members), and the committee can invite TEs according to the requirements of the case investigation to decide whether there was a medical error or harm and to outline the causal relationships. Currently, in Oman, there are several ways to file a complaint if medical malpractice is suspected. The complainant(s) can file the case through various outlets within the MOH, as shown in , where such cases are initially investigated by the RMTC. The HMC directly receives all cases referred to it by the courts of law when a complainant files the case through them. Additionally, HQ-MOH, the undersecretary of health’s office at the MOH headquarters, also sends all cases suspected of medical errors to the HMC for further investigation. These cases are initially reviewed by the CTC, and if it concludes that no medical error was made, the case is closed at the MOH level and the complainant(s) are informed of the decision. However, if the complainant(s) are still not satisfied with the outcome, they can file a case in the primary court for further HMC investigation. HMC cases registered and reported from January 1, 2010 to December 31, 2021 were reviewed and studied. In total, the HMC received 1284 complaints during this period, and investigations had been completed for 1048 of these cases, leading to a determination of whether a medical error had occurred.
Documented data included the specialties of the medical personnel involved, selected demographic characteristics, the region of occurrence, the institution that referred the case, the type of medical institution involved, the time between complaint registration and litigation closure, and the result of the alleged injury as ‘medical error’ or ‘no medical error’. The registered cases were further classified by specialty as follows: (1) Internal Medicine (Accident and Emergency, Gastroenterology, General Practitioner/Family Medicine, Hematology, General Medicine, Nephrology, Oncology, Neurology, and Urology); (2) Cardiology; (3) Dermatology; (4) Surgery (General Surgery, Neurosurgery, Cardiovascular Surgery, Anesthesiology, Hand Surgery, Pediatric Surgery, Spine Surgery, and Plastic Surgery); (5) Dental; (6) Ear, Nose and Throat (ENT); (7) Obstetrics and Gynecology; (8) Orthopedics; (9) Ophthalmology; (10) Pediatrics; and (11) Others. Beyond the data listed above, which were required for the analysis of the anonymised HMC reports, the investigators did not have access to identifiable patient data such as medical records or personal information.

2.3 Statistical analysis

Data were analyzed using SPSS version 25 statistical software (IBM SPSS Statistics). The study considered the result of the completed HMC investigation, classified as ‘medical error’ or ‘no error’, as a binary outcome variable. The year of registration, age, sex, nationality, region of residence of the complainants (Muscat vs. other regions), level of care provided by the health institutions involved with the complaints, number of sessions required to complete the HMC investigation, and the waiting time to initiate the investigation were considered explanatory variables. Descriptive and inferential statistical techniques were used for data analysis. Frequency, percentage, mean, and standard deviation were used to describe the characteristics of the 1284 registered medical litigation cases, while bivariate analysis using cross-tabulation and the Chi-square test was used to analyze the differential effects of the explanatory variables on the outcome of the 1048 investigated cases. Time trends in medical litigation reporting were analyzed using a line graph and a fitted simple linear regression model, assuming that the reported cases over the period follow a linear trend. To identify significant predictors of medical error, we employed multiple logistic regression analysis. A p-value <0.05 was considered statistically significant.

2.4 Ethical approval

The study was approved by the Research and Ethical Review and Approval Committee of the MOH of Oman (MoH/CSR/21/24217). As this is a retrospective review of HMC reports, the committee waived the need for informed consent from the patients.
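As a concrete illustration of the bivariate analysis described in Section 2.3, the short Python sketch below runs a Chi-square test of independence on a 2 × 2 table; the counts, grouping, and variable names are hypothetical and are not taken from the HMC data.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = institution type, columns = HMC outcome.
#                  medical error   no error
table = np.array([[180,            200],    # public (MOH) institutions
                  [120,             90]])   # private institutions

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
# By the study's criterion, p < 0.05 would be read as a significant association
# between institution type and the investigated outcome.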
3.1 Characteristics of malpractice claims

A total of 1284 cases were registered with the HMC between 2010 and 2021, and data from all of these cases were included in the analysis of the incidence of registered medical malpractice claims and the characteristics associated with the cases and complainants. Of the 1284 registered cases, 621 (48.4%) were referred to the HMC by the MOH, 383 (29.8%) by the court system, and 280 (21.8%) by the Public Prosecution. Of the 1284 cases, 1098 were registered by Omani nationals and 186 by non-Omani expatriates.
The average number of cases registered with the HMC for investigation between 2010 and 2013 was 83 per year. However, there was a significant increase in 2014, when a total of 115 cases were registered, followed by 157 cases in 2015. After a decline in the number of cases registered per year after 2015, the caseload peaked again in 2021 with 150 registered cases, as shown in . The average number of medical malpractice cases filed with the HMC between 2010 and 2013 was 83 cases per year, while the average number of registered cases between 2018 and 2021 was 119.75 cases per year, representing a 44% increase in caseload. The fitted trend line showed a general upward trend in medical malpractice cases . The four specialties most commonly involved in HMC investigations were Internal Medicine, Surgery, Obstetrics and Gynecology, and Orthopedics; displays the number of cases registered with the HMC in each specialty by year of registration . The distribution of medical malpractice cases by reporting month indicates a seasonal variation in the filing of cases for HMC investigation in Oman . The rate of referral of cases to the HMC for investigation varied from 5.0% in November to 11.0% in April. January to April showed a higher rate of registration, representing 39.0% of the total reported cases. shows the characteristics of the 1284 HMC cases during the study period. There was almost no difference in the sex distribution of the registered cases, as half of them involved male complainants and the other half involved female complainants. The average age of the complainants was 32.0 years. The majority (70%) of the registered cases were received from adult complainants between 18 and 59 years of age, and approximately half of the cases (51%) involved complainants between 18 and 39 years of age. About 86% of the litigations were brought by Omani complainants, compared with 14.0% by non-Omani complainants. Slightly less than half (48.4%) of the registered cases were received through the MOH. The cases received from the Primary Courts (PC) represented 29.8% of the total cases, and those received from the Public Prosecution represented 21.8%. It should be noted that more than two-thirds (68.4%) of medical malpractice lawsuits were brought against public hospitals/the MOH, while 26.7% of cases were brought against the private health sector, as shown in . The locations of the health institutions involved in the litigations showed that the currently defined urban region (Muscat region) generated 42.8% of the complaints, while the remaining 57.2% of the cases occurred in the other 10 regions combined. The distribution of medical litigation across medical specialties showed that obstetrics and gynecology was the specialty most frequently involved in the cases raised (20.1%), closely followed by internal medicine and associated subspecialties (19.7%), surgery (17.6%), and orthopedics (13.8%). Medical malpractice litigation claims were low for the specialties of dermatology (1.6%), cardiology (2.4%), ENT (2.4%), and ophthalmology (4.7%).
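To make the fitted trend line described earlier in this section concrete, the following Python sketch fits a simple linear regression to annual case counts; the year-by-year numbers are illustrative placeholders (only the 2014, 2015, and 2021 counts quoted in the text are reproduced), not the exact HMC series.

import numpy as np

# Illustrative annual counts of registered cases; only the 2014 (115), 2015 (157)
# and 2021 (150) values are taken from the text, the rest are placeholders.
years  = np.arange(2010, 2022)
counts = np.array([80, 85, 82, 85, 115, 157, 110, 100, 105, 115, 120, 150])

# Least-squares fit of a straight line: count = slope * year + intercept.
slope, intercept = np.polyfit(years, counts, deg=1)
print(f"Estimated trend: about {slope:.1f} additional cases per year")
print(f"Extrapolated caseload for 2022: {slope * 2022 + intercept:.0f}")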
3.2 The outcome of HMC investigations

The 1048 litigation cases with completed HMC investigations were analyzed to assess potentially significant background characteristics associated with a final outcome of ‘medical error’ or ‘no medical error’. Of the 1048 cases, 495 were referred to the HMC by the MOH, 306 by the court system, and 247 by the Public Prosecution. More than two-thirds (68.0%) of the HMC-investigated cases were completed within one investigative session, while 23.4% of the cases required two sessions and 6.7% needed three or more investigative sessions, as shown in . The average waiting time to initiate the HMC investigation was 10.2 months. The waiting time was less than 6 months for about a fifth (19.6%) of the investigated cases, 6–11 months for 41% of the cases, and 18 months or more for about 9% of the cases . The HMC investigation found no medical error in almost 51% of the cases. presents the differentials in medical error across the selected characteristics, with p-values from the Chi-square test. Age, sex, region, and waiting time to initiate the HMC investigation did not show significant differences in the rate of observed medical errors. Medical error was found to be more prevalent among non-Omani nationals than among Omani nationals (58.6% vs. 47.9%). The error rate was higher in the urban region than in the other regions (52% vs. 48%). The rate of medical error increased with the number of sessions required to complete the HMC investigation. The error rate was also higher among MOH-referred cases (56.4%) and lower among cases referred by the court system (35.3%). The error rate was higher for cases involving private hospitals than for those involving public hospitals under the MOH (57.6% vs. 47.2%) . Although the dermatology specialty had the lowest number of registered cases, the conclusions of the HMC investigations indicated that the highest error rate (76.9%) was associated with the dermatology specialty. In contrast, medical errors were found to have occurred in only about 50% of the cases involving the obstetrics and gynecology specialty, even though the highest number of cases was filed against this specialty. The rate of medical errors was lowest among cases involving the cardiology specialty.

3.3 Predictors of medical errors

The results of the multiple logistic regression analysis of medical errors, as presented in , identified nationality (i.e. Omani vs. non-Omani), the institution that referred the litigation case to the HMC, the type of health institution involved in the case, the number of investigative sessions, and the waiting time to initiate the HMC investigation as significant predictors of medical errors. The odds of medical error were found to be 42% lower among Omani nationals compared with their non-Omani expatriate counterparts (AOR = 0.581, 95% CI: 0.387–0.874). The odds of medical error were 65% lower among cases referred to the HMC by the court system than among cases referred by the Public Prosecution or the MOH (AOR = 0.349, 95% CI: 0.236–0.518). The risk of medical error was found to be approximately 1.5 times higher among cases related to private health institutions than among cases related to public health institutions under the MOH (AOR = 1.444, 95% CI: 1.017–2.168). The number of sessions required to complete the HMC investigations showed a significant positive association with medical errors, as the odds of medical error increased with the number of sessions. The risk of medical error was found to be almost four times higher among cases for which the investigation was completed in three or more sessions (AOR = 3.993, 95% CI: 2.233–7.137). The waiting time to initiate the HMC investigation was found to have a negative association with medical errors.
For example, the risk of medical errors was 1.8 times higher in cases with a waiting time of less than 6 months compared to cases with a waiting time of 18 months or more (AOR = 1.772, 95% CI: 1.026–3.062). The specialty involved also appeared to be a significant predictor of medical errors. Compared to internal medicine, specialties such as dermatology (AOR = 4.580, 95% CI: 1.149–18.249), ENT (AOR = 3.629, 95% CI: 1.407–9.357), dental (AOR = 2.091, 95% CI: 1.045–4.185), and surgery (AOR = 1.692, 95% CI: 1.090–2.626) had significantly higher odds of medical errors.
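For readers who wish to reproduce this kind of analysis on their own claims data, the R sketch below illustrates the two analytic steps reported above: bivariate Chi-square comparisons of error rates and a multivariable logistic regression yielding adjusted odds ratios. It is a minimal illustration only; the file name and variable names (hmc_cases.csv, error, nationality, referral_source, sector, sessions, wait_time, specialty) are hypothetical placeholders, not the study's actual dataset or coding scheme.

```r
# Minimal sketch of the analyses described above; file and column names are
# hypothetical, not the study's actual data.
cases <- read.csv("hmc_cases.csv")  # one row per completed HMC investigation

# Bivariate differential: confirmed error (0/1) by complainant nationality
chisq.test(table(cases$nationality, cases$error))

# Multivariable logistic regression with the predictors reported above
fit <- glm(error ~ nationality + referral_source + sector +
             sessions + wait_time + specialty,
           data = cases, family = binomial)

# Adjusted odds ratios (AORs) with Wald-type 95% confidence intervals
exp(cbind(AOR = coef(fit), confint.default(fit)))
```

Exponentiating the model coefficients, as in the last line, converts log-odds into the adjusted odds ratios reported in the text.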
The prevalence of medical errors varies widely throughout the world and is difficult to quantify accurately owing to a lack of standardisation in reporting and in defining what constitutes a medical error . Despite the lack of a taxonomy of medical errors, there is evidence to suggest that they are a significant and widespread problem affecting millions of people every year. In the United States, the Institute of Medicine Committee on Quality of Health Care estimated that medical errors cause between 44,000 and 98,000 deaths each year . More recent estimates suggest that the number of deaths due to medical errors may be even higher, with one study putting the figure at 251,000 deaths per year . In the United Kingdom, it has been estimated that there are approximately 250,000 adverse events (including medical errors) in the National Health Service each year, causing harm to about 100,000 patients . In Australia, a 2016 study estimated that medical errors may cause up to 17,000 deaths each year . The prevalence of lawsuits against healthcare workers in Oman and its neighboring Gulf Cooperation Council (GCC) countries has received little attention. In recent years, it has been widely recognised that the healthcare sector in the GCC countries has been expanding and modernising rapidly, and this growth has been accompanied by an increase in the number of medical malpractice cases. However, the legal framework for medical malpractice in the GCC countries is still evolving, and the procedures for filing and settling lawsuits can be complex. Against this background, this study was conducted to describe the characteristics of malpractice claims, the outcomes of the medical liability committee investigations, and the predictors of medical errors in Oman. 4.1 Characteristics of malpractice targets Morbidity and mortality are significantly affected by medical errors.
Medical errors account for 10% of all deaths in the United States, making them the third leading cause of death, although there are differing opinions regarding this statistic . Nevertheless, medical mistakes can have major consequences, and patient safety policies should be improved to mitigate and protect against them. The number of malpractice litigation cases filed with the HMC in Oman fluctuated during the study period, with a total of 1284 cases registered between 2010 and 2021. The number of registered cases peaked twice, in 2015 and 2021. In 2015, there were changes to the internal referral system from the MOH to the HMC, which is considered the main cause of the surge in cases during that year. Following this, a short decreasing trend was observed until 2019, leading to another surge in registered cases that peaked in 2021. This second surge was related to the reduced functioning of the HMC during the onset of the COVID-19 pandemic, which led to a backlog of cases. Overall, however, there was an increase in the filing of medical malpractice cases in Oman during the study period compared to previous years, with an increase of 31% that mirrors international trends . The increase could be related to multiple factors, such as the growth of the Omani population, the increase in the number of medical facilities and healthcare providers, and the growing complexity of medical and surgical interventions. Increased litigation could also be attributed to increased awareness within the community of the standards of medical care, medical errors, and the rights of patients to complain and litigate against healthcare professionals . Unlike previous generations, Oman now has access to high levels of Internet connectivity, and social networks have allowed the discussion of topics that were once hidden or discouraged . Oman has generally triumphed over the traditional enemy of health in developing countries, namely communicable diseases. However, a consequence of the recent rapid socio-economic changes in Oman is the emergence of non-communicable diseases. Thus, according to Al-Mandhari et al , 'the existing model of health services in the country, top-down, professionally driven and cure-oriented, is increasingly unable to deal with this new assortment of health problems' (pp. 319). It remains to be established whether the rising tide of non-communicable diseases has caused dissatisfaction with healthcare services in Oman, which in turn could fuel litigation for alleged medical malpractice. It is worth noting that, while there is an increase in medical litigation in Oman, the trend captured in the present study appears low compared to international statistics, although such a conclusion may be premature since the cited comparison study is based on primary care rather than general national healthcare trends . Several factors may contribute to the lower rate of medical litigation in Arab countries, including limited access to legal services, a lack of regulatory frameworks, and cultural attitudes that discourage patients from questioning healthcare providers' decisions or holding them responsible for adverse outcomes . Therefore, more studies are warranted in these regions to lay the groundwork for culturally sensitive strategies to increase patients' awareness of their rights. In terms of the demographics of malpractice claims, the present cases were evenly distributed between male and female complainants.
The majority of cases (67%) were raised by adult complainants aged 18–60 years, and almost half were aged 19–40 years, with an average age of 32 years. This age distribution reflects the structure of the Omani population, most of which is under 35 years of age . In terms of nationality, the majority of disputes (86%) were brought by Omani complainants. This is not a surprising finding considering that almost 95% of all visits to MOH health institutions are by Omanis . It should be noted that nearly 39% of the population of Oman is made up of expatriate workers from different parts of the world . Slightly less than half (48.4%) of the registered cases were referred by the MOH, while 28.5% came from the Primary Courts (PC) and 21.8% from the Public Prosecution. More than two-thirds of medical malpractice lawsuits were filed against government MOH hospitals (68.4%), while 26.7% were brought against the private health sector. The MOH provides healthcare services to most of the population of Oman, with more than 51 MOH hospitals compared to the 4 other non-MOH government hospitals and the 27 private hospitals . Complex cases are generally not handled by private healthcare institutions and are, in general, referred to MOH or tertiary-level hospitals due to limited facilities. The urban region (Muscat) accounted for 42.8% of the complaints, with each of the remaining governorates receiving fewer complaints. This implies that the presently defined urban region, i.e. Muscat alone, bore a higher burden of litigation than any other individual region. This may be explained by the high population density and the increasing number of government and private healthcare facilities concentrated in Muscat, including several tertiary care centers, which generally deal with complex cases requiring higher-level interventions that carry a greater risk of complications or adverse outcomes. The lowest number of litigations came from the regions where population density is lowest in Oman. Most of the lawsuits were brought against surgical specialties (obstetrics and gynecology, orthopedics, and general surgery). In the United States, Jena et al analysed 1992–2005 malpractice data on all physicians covered by a professional liability insurer (n = 40,916) across 25 specialty fields . The percentage of physicians who faced a claim each year varied between medical specialties, ranging from 19.1% for neurosurgery, 18.9% for pulmonary and cardiovascular surgery, and 15.3% for general surgery to 5.2% for family medicine, 3.1% for pediatrics, and 2.6% for psychiatry. In Italy, Bolcato et al assessed medical professional liability in tertiary hospitals from 2003 to 2019 . The claims were classified as follows: 37% were related to the surgical field, 17% to internal medicine, and 35% to emergency care. In terms of types of medical errors, compensation was granted in 30% of cases involving diagnostic errors, 26% involving therapeutic errors, 47% involving execution errors, and 55% involving organizational deficiencies. In general, studies from different populations suggest that surgical specialties (obstetrics and gynecology, orthopedics, and general surgery) appear to be more liable to a malpractice claim or lawsuit . This is consistent with the findings of the present study.
This could be explained by the high degree of intervention and the need for immediate and prompt decisions and actions in these specialties, which contribute to higher levels of stress and can compromise the quality of care and the doctor-patient relationship. Stress and burnout have been established to be associated with medical errors . Stress and high workloads could also lead to poor-quality medical records, which are important evidence after adverse events and in medicolegal cases. Documentation of counseling sessions, informed consent forms, and other records are important evidence in the investigation. It is recommended that all potential complications be mentioned during informed consent, not only to protect the physician but also to emotionally prepare the patient in the event of a complication. Such pre-intervention counseling and consent are therefore critical, as the patient might otherwise misinterpret a complication of the intervention as a medical error . 4.2 Outcomes of the medical liability committee The HMC completed investigations for a total of 1048 cases during the study period, and in almost half of these cases no medical error was found. The waiting time to initiate the investigation for cases referred to the HMC was variable, ranging from less than six months to more than 18 months from the time of case registration. The longer durations may be partially related to the frequency of HMC meetings in the initial period of the study (before 2015), when it used to take longer to finalize cases. Additionally, the logistical difficulty of obtaining complete medical records from all hospitals involved in a case added to the delay. However, this waiting period has decreased in recent years, as the HMC has made internal arrangements to speed up the investigation process and to finalise its reports within three months, as mandated by a new law that came into effect in 2019, unless the HMC requests an extension. At the time of case registration, the HMC prioritises the investigation process depending on the nature of the case referred, including the institution that referred it. For example, cases referred from the court and the Public Prosecution get top priority, followed by those referred from the MOH. Additionally, cases with a high suspicion of medical error, especially when death is involved, are prioritised for expedited HMC investigation. When complainants of non-Omani nationality filed a medicolegal case, the chances of confirming that an error had occurred were greater than among Omani nationals. One possible explanation may be differences in how non-Omanis perceive medical errors compared to nationals. The threshold for raising complaints may be lower among Omani nationals, who likely have less reason to be cautious about the lengthy and tedious process than non-nationals. Furthermore, language barriers between patients and healthcare professionals have been found to lead to miscommunication and to reduce the quality of healthcare and patient satisfaction. This may have been a contributing factor in cases of medical error registered by expatriate patients who were not fluent in English or Arabic . During the investigation process, the HMC conducts investigative sessions to review medical records, meet the complainant(s), and interview the involved staff in the presence of HMC members and TEs. Each session can last up to four hours.
The investigation was carried out in a single investigative session in 68% of the cases. The more HMC sessions a case required, the higher the probability of confirming an error. This could be due to the complexity of the case and the involvement of more staff, as well as the HMC's conscientiousness in reaching an accurate final decision. Other reasons for conducting more than one investigative session vary between cases, such as missing medical records, the involvement of additional staff in the investigation, or the identification of new information that changed the direction of the investigation. This process could be expedited by an electronic system linking the national electronic health records with the HMC. Cases referred by the MOH and the Public Prosecution had a higher chance of a confirmed medical error than cases referred by the court. Cases referred from the court are not filtered by the local technical committees, whereas those referred from the MOH and the Public Prosecution pass through the RMTC and CTC and therefore carry a higher potential for medical error. Similar observations were made for cases involving private sector hospitals, which leads to the conclusion that they bear a higher burden of medical errors compared to MOH hospitals. The frequency of medical malpractice cases in the dermatology specialty was low; however, the HMC investigations found the highest medical error rate among these cases. Most medicolegal litigation in dermatology arose from negative outcomes of aesthetic surgery and procedures, mainly in private practice. Conversely, although obstetric and gynecological services attracted the highest number of complaints, medical malpractice was confirmed in only about 50% of the investigated cases. Finally, the number of cases related to the cardiology specialty was relatively low, and confirmation of medical error in these cases was also less likely. To date, this is the first study of its kind from the emerging economies of the GCC. The present data suggest that several factors can predict medical errors, including the complainant's nationality (i.e. Omani vs. non-Omani), the referring institution, the type of health institution involved, the specialty in question, the number of HMC investigation sessions, and the waiting time for the investigation to be initiated. A negative association was found between the waiting time for the investigation and the HMC finding of a medical error. This analysis also indicates some areas that would require vigilance to reduce court cases. Cases involving the dermatology, dental, and surgical specialties carried a higher risk of a confirmed medical error than those involving internal medicine. This implies that specialties involving regular interventions tend to accrue higher litigation rates than specialties such as internal medicine and its associated subspecialties. Malpractice claims analysis is a commonly employed method to identify areas for improvement in patient safety and quality of care. In general, the study findings provide information that can guide healthcare institutions, policymakers, and professionals in implementing preventive measures. Addressing the identified factors associated with malpractice claims could help prevent medical errors and minimize the burden of medical malpractice litigation.
4.3 Limitations Although this study on medical malpractice litigation in Oman provides valuable information, it is important to consider some limitations that can affect the interpretation of the findings. First, the present study is retrospective and relies on existing data that may not have been collected with the specific research question in mind. Moreover, retrospective studies are observational and therefore cannot establish definitive causality between variables. Second, the number of cases investigated by the HMC does not represent all medical litigations or complaints about medical errors in Oman. Some complaints investigated by the RMTC were not referred to the HMC, either because no medical error was found at the RMTC level or because the complainant(s) did not wish to pursue the claim further. Related to this, the study relies on the availability of documented medical malpractice cases. It is possible that not all cases were officially reported and documented, which may have led to reporting bias; cases with less severe consequences or those settled outside the legal system may be under-represented. The results presented here cover only the registered cases for which the HMC completed its investigation during the study period (82% of the cases), so the 18% of registered cases still under investigation were not included in the analysis. Third, the present study focuses on cases of medical malpractice registered with the Higher Medical Committee (HMC) in Oman over a 12-year period. The findings may not be generalizable to other regions or countries with different health systems, legal frameworks, or cultural contexts, and caution is therefore needed when extrapolating the results beyond Oman. Fourth, in the present analysis, seasonality appears to have had an impact on registered cases. During the summer months, people may have chosen to travel abroad for medical care or for personal reasons, which could have resulted in decreased healthcare utilization and fewer visits to health facilities. As a result, medical errors that did occur could be under-reported or unnoticed if individuals delayed seeking medical attention or received care outside Oman. This would require further scrutiny, since it could be an artefact of data collection. Finally, medical recordkeeping and litigation are nascent in the country; in fact, data cleaning was essential in the present analysis to remove what were deemed spurious outliers. In addition to the limitations of a retrospective design, a more vigilant mechanism is needed to track the typology and outcome of medical litigation in order to increase accountability in healthcare in Oman. The question remains whether the currently observed trend could be the tip of the iceberg. Related to this, the study identifies certain predictors of medical errors, such as the complainant's nationality and the type of health institution involved. However, it is important to recognise that there may be additional confounding factors not considered in the analysis that could influence the results, such as socioeconomic status, educational level, or healthcare access disparities. Given these limitations, it is worth conducting a longitudinal analysis to track trends and patterns of medical malpractice litigation in Oman beyond the 12 years covered in the study.
Such an analysis would help discern whether the observed upsurge in recent years persistently escalates, stabilizes, or recedes, and would aid in identifying longer-term trends. Comparative studies between Oman and other GCC countries are also warranted, since these countries have similar health systems and cultural contexts. Such studies can provide information on similarities and differences in trends, outcomes, and predictors of medical malpractice, allowing cross-learning and the identification of best practices. Finally, if the observed trend proves valid, the contributing factors to medical errors and the ethical and legal dimensions of medical malpractice could be explored through qualitative research, such as interviews or focus groups with patients, healthcare workers, health policy professionals, and legal experts. This would provide rich insight into experiences, perspectives, and contextual factors that may not be captured by quantitative analysis alone.
The present study examined medical malpractice litigation in Oman from 2010 to 2021, analyzing data from cases registered with the HMC for which the investigation had been completed under the existing system. Most of the litigations were initiated by adult Omani complainants, came predominantly from the urban Muscat region, and were frequently related to public hospitals. The most common specialties involved in litigation were obstetrics and gynecology, internal medicine, surgery, and orthopedics. About half of the appeals or grievances were dismissed owing to a lack of evidence of medical negligence or malpractice. The predictors of medical errors included nationality (i.e., Omani vs. non-Omani), the institution that referred the case, the specialty and type of health institution involved, the number of investigation sessions required, and the waiting time before the start of the HMC investigation. Some of the associated factors, pending further scrutiny, have the potential to be used to design preventive measures against medical errors. An upward trend in the incidence of medical malpractice litigation was noted; however, the present research is subject to many potential confounders. More research with a more robust methodology is required to confirm the accuracy of this trend and to shed light on the contributing factors to medical malpractice and errors in Oman. The results of the present study lay the groundwork for mechanisms to minimize the rates of medico-legal cases in Oman, as well as for increasing literacy on medical errors among the public and healthcare workers.
S1 Checklist STROBE statement—Checklist of items that should be included in reports of observational studies. (DOCX)
Understanding Health Communication Through Google Trends and News Coverage for COVID-19: Multinational Study in Eight Countries
419c8b34-ed1b-466b-ae50-ac1dc6ef332b
8691414
Health Communication[mh]
In late December 2019, a cluster of patients with pneumonia of unknown etiology was reported in Wuhan, China . Soon after, a new type of coronavirus was identified as the pathogen causing this pneumonia , and the disease it causes was named COVID-19 by the World Health Organization (WHO) . As the number of COVID-19 infections continued to increase, the WHO declared COVID-19 a pandemic on March 11, 2020 . Globally, as of July 2020, there had been more than 10.3 million confirmed cases and more than half a million deaths in over 200 countries , and the pandemic also caused global supply chain disruptions . Prevention and control of the epidemic are therefore a matter of great urgency. Surveillance is an essential component of infectious disease control . Nevertheless, traditional public health surveillance of epidemic diseases is based on government-implemented data gathering, resulting in data that can take years to become available . Traditional laboratory monitoring is still used in most countries, but in recent years, some countries have tried to use internet search query data to assist traditional public health surveillance, for example through Google Flu Trends (GFT) and Google Dengue Trends . In the future, various types of internet data, such as search data, will offer more possibilities for better disease prevention and control . Google Trends is one of the most popular open online tools for assessing data from public internet searches and has multiple advantages . Specifically, it collects real-time data automatically and provides quantitative and qualitative data that have been applied in informatics research on various communicable and noncommunicable diseases . For example, Ginsberg et al employed Google search queries to track influenza-like illness in a population. Ocampo et al were the first to use Google search queries in malaria surveillance. Glynn et al assessed the relationship between breast cancer awareness campaigns and internet search activity from 2004 to 2009 using Google Trends. All of the above research drew similar conclusions: Google Trends can supplement traditional public health surveillance and help us better understand public responses and sentiment. Moreover, Google Trends can help reveal the need for health-related information . In addition, news coverage of COVID-19 by the mass media played an important role during the outbreak . As a source of information, news coverage can provide important information to the public and, in turn, guide people toward positive, healthy behaviors or prevent the development of unhealthy behaviors. News coverage influences public behavior by both direct and indirect routes: news content can directly influence the behavior of recipients or indirectly influence interpersonal discussion and the transmission of coverage content . For instance, the public's online searches for information about diseases increase during disease awareness months . Moreover, some researchers have noted that internet search behaviors and news coverage were related to traditional surveillance data, and that news coverage appeared to promote internet searches for health topics . In the area of public health , the news media can serve as a tool to inform the public about prevention and control strategies during an emerging pandemic. On the other hand, the news media can also have a negative side: news coverage might not be based on expert assessments, may present relatively independent views, and might cause public panic.
Although newsworthiness is complex, analyzing internet data can help improve the effectiveness of public communication . In other words, news coverage plays an important role in health communication. Hence, acquiring available online data, including internet search query data and social media information, can provide novel insights for the prevention and control of COVID-19 . To date, only a few studies have focused on internet search data combined with news coverage data. This study, therefore, used Google query data, news coverage data, and new COVID-19 case data to understand health communication during the early stage of this epidemic. Overview In this study, we collected data from Google Trends, news coverage, and new COVID-19–related daily cases from January 1 to April 29, 2020 (120 days), which is considered the early period of the epidemic, in eight countries: the United States, the United Kingdom, Canada, Singapore, Ireland, Australia, South Africa, and New Zealand. We then described the different Google Trends search queries and news coverage trends in these countries to understand the situation of health communication, and we explored their connection with the prevention and control of COVID-19 at the early epidemic stage. Data Collection Google Query Data Google Trends is one of the most popular online tools used to track internet search volumes. Users of Google Trends can obtain trend data for search terms . Google Trends provides a relative search volume (RSV) to depict the popularity of a specific search term in a specific geographic area over a period of time. The value of RSV ranges from 0 to 100: a value of 0 means there was not enough data for the term, and a value of 100 represents the peak popularity of the term . Based on a previous study , symptoms, treatments and medical resources, measures, and the virus itself were the major topics covered by online media during the early period of the COVID-19 pandemic. Therefore, we selected "diseases," "treatments and medical resources," "symptoms and signs," and "public measures" as search topics, and we used their associated terms as search terms. Also, due to language limitations of the search queries, only English-speaking countries were included in this study . Based on population size, we selected eight English-speaking countries for the study: the United States, the United Kingdom, Canada, Singapore, Ireland, Australia, South Africa, and New Zealand. RSV data for the above topics in these eight countries, between January 1 and April 29, 2020, were collected and then exported into CSV files. The topics and their query terms are shown in . News Coverage Data Meltwater is a platform that provides real-time monitoring of domestic and overseas news and covers more than 300,000 online websites, news clients, and other news media . With wide geographical coverage, Meltwater provides rich news data from different countries. To compare and analyze the news media coverage of COVID-19, we selected news media from the eight countries (ie, the United States, the United Kingdom, Canada, Singapore, Ireland, Australia, South Africa, and New Zealand) and searched the news coverage from January 1 to April 29, 2020, with "covid-19" or "coronavirus" as the keywords. New Case Data The number of new daily cases of COVID-19 was obtained from WHO surveillance data .
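The RSV series analysed here were exported manually as CSV files from the Google Trends website. Purely as an illustration of how comparable relative search volumes could be retrieved programmatically, the sketch below uses the gtrendsR package for R; this package and workflow are an assumption added for illustration, not the procedure the authors describe.

```r
# Illustrative alternative to manual CSV export from the Google Trends site;
# the gtrendsR package is an assumption, not the tool used in this study.
library(gtrendsR)

countries <- c("US", "GB", "CA", "SG", "IE", "AU", "ZA", "NZ")

# In Google Trends, "+" acts as OR, so one combined keyword returns the
# overall RSV for the "diseases" topic in a given country.
rsv_by_country <- lapply(countries, function(geo) {
  gtrends(keyword = "coronavirus + covid-19 + pneumonia",
          geo = geo,
          time = "2020-01-01 2020-04-29")$interest_over_time
})
names(rsv_by_country) <- countries

# Daily RSV ("hits", scaled 0-100) for the United States
head(rsv_by_country$US)
```

Either route yields one daily RSV series per country per topic, which is the input to the analyses described next.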
Analytical Framework First, we used line graphs to show the search trends for the different topics in the eight countries, together with the epidemic curves of new COVID-19 cases. We then assessed the most popular terms at the country level by comparing their search peaks to determine the characteristics of the various terms in different countries, and we explored the reasons for fluctuations in the search query trends and their implications for the prevention and control of COVID-19. Additionally, in Google Trends, the plus sign functions as "OR" and can be used to connect multiple terms into an overall term . Thus, we used "+" to integrate the multiple terms of each topic into an overall term for that topic, whose RSV represents the overall RSV of the topic. For example, we used the RSV of "coronavirus + covid-19 + pneumonia" to represent the overall RSV of "diseases." Second, we used the neighborhood average method to smooth the news coverage data . We then used line charts to show the longitudinal trends of news coverage and identified the similarities and differences in news coverage between the eight countries. Furthermore, to further examine the relationship between news coverage and internet search queries, as well as the relationship between search queries and daily news, we summed the overall RSVs of the four topics to obtain a total RSV and added it to the line chart along with the epidemic curve of new daily cases, so that the changes in the three series could be observed more intuitively in the different countries. Moreover, we conducted time-lag correlation analyses between the overall RSVs of the search queries for the different topics and the number of new COVID-19 cases each day, as well as between the overall RSVs and the number of daily news items. The cross-correlation function of the "tseries" package in R software (version 4.0.5; The R Foundation) was used to compute the time-lag correlations. In the analysis, time lags between –17 and +17 days were used, and the Pearson correlation coefficient was used as the correlation measure. Finally, interrupted time series analysis was used to evaluate the impact of the appearance of the first COVID-19 case on the four search terms of the topic "symptoms and signs." Taking the date of the first COVID-19 case as the change point, we used a generalized least squares estimator to fit a segmented linear regression model and evaluate the change in the level and slope of the RSV after the first case was identified. Residual autocorrelation was tested using the Durbin-Watson test. All hypothesis tests used a significance level (α) of .05.
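To make the time-lag correlation step concrete, the minimal R sketch below computes lagged Pearson correlations between two daily series. The study cites the tseries package; the base-R ccf() function used here performs the same cross-correlation computation. The vectors rsv_total and new_cases are placeholders for two aligned 120-day series (for example, a topic's overall RSV and daily new cases, or RSV and daily news counts).

```r
# Hedged sketch of the time-lag correlation analysis; rsv_total and new_cases
# are assumed numeric vectors of length 120, aligned by calendar date.
lag_corr <- ccf(rsv_total, new_cases, lag.max = 17, plot = FALSE)

# Pearson correlation at each lag from -17 to +17 days
# (see ?ccf for the lead/lag sign convention)
corr_table <- data.frame(lag = drop(lag_corr$lag), r = drop(lag_corr$acf))

# Lag with the strongest absolute correlation
best <- which.max(abs(corr_table$r))
corr_table[best, ]
```

The interrupted time series step could be fitted analogously, for example with nlme::gls(), regressing RSV on time, a post-first-case indicator, and time since the change point to capture the level and slope changes described above.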
to 4 depict the trends of a specific query topic by its associated query terms, accompanied by new daily cases in the eight countries studied. For the topic “diseases,” we used the search terms “coronavirus,” “covid-19,” and “pneumonia” . Regarding the term “coronavirus,” its RSV increased around January 20, 2020, with a small peak at the end of January 2020. Except for Singapore, the RSV of “coronavirus” in other countries all formed an obvious peak in mid to late March 2020.
Regarding the term “covid-19,” its RSV began to increase on February 11, 2020, and generated the top search peak from late March to early April 2020; around April 2020, the RSV value of this term surpassed that of “coronavirus.” Compared to these two terms, the trend for “pneumonia” fluctuated very little between January and April 2020. shows the trends of the topic “treatments and medical resources,” including the query terms “ventilator,” “vaccine,” and “mask.” The term “mask” was the most searched term, followed by “vaccine” and “ventilator.” Regarding the term “mask,” there was one main search peak that occurred in April 2020 for all eight countries despite multiple spikes found in specific countries (ie, Singapore, Ireland, Australia, and New Zealand). Regarding the term “vaccine,” its RSV for most countries rose starting in March and generated several small spikes near mid-March 2020. shows the trends for the topic “symptoms and signs” related to COVID-19. Among its query terms, “fever” was the most searched term, followed by “cough,” “shortness of breath,” and “tiredness.” Regarding the terms “fever” and “cough,” their top search peaks were formed around mid-March 2020 for all countries except Singapore, slightly earlier than the peak of new daily cases. In Singapore, the search peaks of “fever” and “cough” appeared between late January and mid-February 2020. shows the trend for the topic “public measures,” using the query terms “quarantine,” “social distancing,” and “lockdown” during this study period. The RSV of “lockdown” was the highest, followed by “quarantine” and “social distancing.” For all these terms, their RSVs were very low before March 2020, and the RSVs of “quarantine” and “lockdown” increased and formed search peaks after mid-March 2020. News coverage trends related to COVID-19 are shown in . According to the neighborhood average method, we set 7 days as a base period to smooth the number of news coverage items. With the United States as an example, let $y_1, y_2, \ldots, y_n$ be the true numbers of news coverage items from January 1 to April 29, 2020, where $n = 120$. The fitted value of news reports $S_t$ was then obtained as $S_t = (y_{t-3} + y_{t-2} + y_{t-1} + y_t + y_{t+1} + y_{t+2} + y_{t+3})/7$, where $y_{t-3}, y_{t-2}, y_{t-1}$ represent the true numbers of news coverage items 3, 2, and 1 days before day $t$, and $y_{t+3}, y_{t+2}, y_{t+1}$ represent the true numbers 3, 2, and 1 days after day $t$, for $t = 4, \ldots, 117$. Across eight countries, the number of news reports remained low before February 2020. From the end of January, the news report number gradually increased until the end of March 2020 and remained stable afterward. This trend was consistently observed in all countries, except the United States. In contrast, the coverage in the United States soared from around March 29, 2020, far outpacing that in any other country by nearly 300 times. Also, when comparing the trends of the total RSVs and news coverage, we identified three main patterns across the eight countries, which we have termed Singapore, the United States, and other country patterns. In Singapore, the trends of total RSVs formed two major peaks between late January and mid-February and between mid-March and early April, respectively, and the number of news reports increased gradually to a relatively high level starting from the end of January 2020.
In the United States, as the total RSVs reached a peak around mid to late March 2020, the total RSVs began to decline, while the amount of low-level news coverage suddenly increased to a relatively high level at the end of March 2020. In other countries, the total RSVs and the number of news coverage items spiked in mid-March, but the growth of total RSVs occurred slightly earlier than that of news coverage items. Across all patterns, the total RSVs gradually dropped to the baseline level after the peaks from mid-March to early April, while the news coverage items remained at a higher level. shows the time-lag correlation between the overall RSV for the topic “treatments and medical resources” and the new daily cases. With the exception of Singapore, there was a positive correlation between the overall RSV for the “treatments and medical resources” topic and the new daily cases in all countries, with the highest correlation being 0.8 for the United States. Also, we divided the eight countries into three categories: (1) Singapore; (2) the United States, the United Kingdom, Canada, South Africa, and Ireland; and (3) Australia and New Zealand. In Singapore, the overall RSV for the “treatments and medical resources” topic gradually decreased within 17 days before the peak of new daily cases of COVID-19; after forming the peak of new cases, there was a clear negative correlation. In the second category of countries (ie, the United States, the United Kingdom, Canada, South Africa, and Ireland), the overall RSV for the “treatments and medical resources” topic was maintained at a high level for about 17 days before the peak of new daily cases was formed, and then decreased gradually; the correlation remained above 0.2. In other words, the correlation between the overall RSVs of these countries and the new daily cases was maintained at a medium to high level during the time lag of –17 to 17 days. In the third category of countries (ie, Australia and New Zealand), about 1 day and 6 days before forming the peak of new daily infections, the overall RSV for the “treatments and medical resources” topic reached the highest levels, with the maximum correlations being close to 0.8 and 0.7. The time-lag correlation between –17 and 17 days showed a high curve trend in the middle and was low on both sides. shows that there was a positive correlation between the overall RSV for the topic “diseases” and the number of daily news items in eight countries, with the highest correlation coefficient exceeding 0.8; this indicated that as the number of search queries on the topic of “diseases” increased, the number of daily news items related to COVID-19 also showed an increasing trend. We divided the eight countries into two categories. The first category included only the United States; its maximum correlation appeared in the 17 days before the largest number of daily news reports, and then the correlation gradually decreased within the time lag from –17 to 17 days and showed an obvious negative linear trend. That is, the public’s interest in the topic of “diseases” reached its peak 17 days before the peak of news coverage and then gradually decreased over time. The second category included the United Kingdom, Canada, Ireland, Singapore, Australia, South Africa, and New Zealand. During the 17 days before the largest amount of daily news, public interest in the topic of “diseases” remained high. 
Most of these countries reached the highest level of public interest in “diseases” in about 1 day before the largest amount of daily news; the maximum correlation was close to 0.8. However, within 17 days after the largest amount of daily news, the public gradually lost interest, but most of the correlations remained above 0.2; that is, the correlations maintained a moderate level. Figures S1 to S3 in show the results of the time-lag analysis between the overall RSVs for the topics “diseases,” “symptoms and signs,” and “public measures” and the number of new daily cases. Figures S4 to S6 in show the results of the time-lag analysis between the overall RSVs of the topics “treatments and medical resources,” “symptoms and signs,” and “public measures” and the number of daily news items. Table S1 in reports the effect of the first COVID-19 case on the RSVs of the search terms for the topic “symptoms and signs.” Principal Findings Regarding the search trends of the topic “diseases,” all of the search peaks were earlier than new cases of COVID-19; this was similar to other studies . When “coronavirus” was used as a search term, this term caused a spike of interest in all countries around January 20, 2020. On that day, the Chinese authorities announced that the virus was contagious, and the first case was found in the United States, which may have prompted the public to quickly recognize the threat and raised public interest. The term “covid-19” was first published by the WHO on February 11, 2020. Since then, its search volume has gradually increased and surpassed the terms “coronavirus” and “pneumonia” to become the main search term for this pandemic. The above findings showed that there were changes in public interest in external events related to the COVID-19 outbreak, indicating that Google Trends had the potential to be used as a tool to monitor public reaction and emotion regarding threatening events . Regarding the search trends of the topic “treatments and medical resources,” the public was the least interested in the term “ventilator,” despite this being an important piece of medical equipment for the treatment of COVID-19 patients, and there was a shortage of ventilators in some countries or regions during the epidemic, such as New York City . However, the majority of healthy persons were more concerned with masks than ventilators. Furthermore, wearing masks is an important means of preventing infection and plays a crucial role in curbing the COVID-19 epidemic . In the situation of mask shortages , the public’s interest in the term “mask” showed great fluctuation; although the reasons for the change in search behaviors were complex, it largely reflected public concern about the shortage of masks to some extent. In addition to masks, vaccination is an important way to end the COVID-19 pandemic ; as such, rising public concern reflected by the term “vaccine” was observed in our study, which was consistent with the findings in a previous study by Paguio et al . In the face of the rapid spread of COVID-19 and the lack of effective vaccines, the public has paid much attention to vaccine research, in part reflected by the panic related to the urgent public need for COVID-19 vaccines, which might also indicate hope in ending the current pandemic . 
Furthermore, in the time-lag correlation analysis, there was a positive correlation between the overall RSV for the topic “treatments and medical resources” and new daily cases for all countries except Singapore, where the maximum correlation coefficient exceeded 0.8 for the United States. In addition, the overall RSV peak for the topic “treatments and medical resources” occurred 0 to 17 days earlier than the peak for new daily cases. The positive correlation coefficient showed that as the search volume increased in this study, the number of new daily cases also showed increasing trends. These results were similar to those from other studies ; therefore, Google Trends has the potential to become a useful tool for disease prevention and control. Moreover, Ali et al found that by observing Google Trends, the public’s interest in telemedicine continued to increase. However, in most countries and regions, the health care system’s digital equipment was unable to meet growing public demand, which reminded relevant stakeholders to incorporate telemedicine into the health care system to combat pandemics. In a study by Nikolopoulos et al , the researchers also used Google Trends data and simulated government policies to model and successfully predict the excessive demand for products and services during the pandemic. The results showed that Google Trends data could identify the dynamic process of prediction and supply chain management directions in order to assist decision makers in making many key decisions on supply chain and disease prevention strategies. Therefore, Google Trends could be used to capture the public’s early concerns or needs in order to identify fluctuations in public demands . During a public health crisis, the RSV increase for specific topics or terms could be regarded as public demands or needs; we could translate these public demands into practice to formulate reasonable countermeasures to respond quickly . For example, Google Trends could provide an opportunity to formulate production plans to avoid supply chain disruptions and ensure reasonable allocation of resources. Specifically, the government could arrange special fiscal budgets in advance to cover expenses related to public health emergencies and their associated impacts, such as subsidies for companies that produce masks and ventilators . However, we still need more research to provide much more evidence about the predictive value in supporting decision-making policies. For risk surveillance of emerging infectious diseases, syndromic surveillance might detect health threats faster than traditional surveillance systems, thus making timely public health action more likely . Recently, Google Trends data have been applied to syndromic surveillance: this is based on the principle that when patients have a certain symptom, they are likely to search for the description of this symptom on Google. When the RSV of one particular symptom is increasing, the syndromic monitors can be alerted after a series of extensive analyses . In this study of “symptoms and signs” search trends, fever and cough were symptoms that the public was most concerned about in most countries, which have been reported as the most common symptoms of COVID-19 . 
Meanwhile, the results of the time-lag correlation analysis showed that the search peaks for the “fever” and “cough” terms were 1 to 17 days earlier than the peak of new cases in each country, with the maximum correlation coefficient being close to 0.9 for Australia; this supports Google Trends data indicating that the above symptoms seemed to act as a warning function during the early epidemic period. Also, many researchers had used specific search data to accurately estimate the level of weekly influenza activity . In other words, there might be a certain relationship between search query data and the number of new cases, which is likely to be useful for surveillance, prevention, and control of COVID-19. However, there has been debate about the usefulness of Google search query data for predicting pandemics; the cancellation of GFT suggests that the predictions by this tool might not be sufficiently accurate . Generally, syndromic surveillance often cannot fully reflect the epidemic status of the disease and will be affected by other factors, such as news coverage and important events . In other studies, media reports have been proven to be an important factor affecting search query interest . In this study, the peak RSV was earlier than the peak number of news reports, and the trend of RSV was still positively correlated with the number of news reports (Figure S5 in ). Therefore, although the predictive value of Google Trends is questionable, future research studies might need to eliminate the influence of factors such as media reports. For the prevention and control of infectious diseases, quarantine, social distancing, and lockdown are all public measures that are used to control the source of infection and block the route of transmission, which are extremely important for the prevention and control of COVID-19 . Regarding the “public measures” topic, the search trend peak was formed in mid to late March, and the corresponding important event was that the lockdown policies of most countries were also released and implemented in mid to late March . Similarly, from the results of the time-lag correlation analysis, the peak public interest in all countries except the United States was close to the peak number of news reports, but the peak of reporting on COVID-19–related news was slightly later than the peak of the public’s interest (Figure S6 in ). Moreover, the RSV of the term “lockdown” was significantly higher than that of the term “social distancing.” In addition to indicating that citizens in most countries were more interested in the term “lockdown,” it might be that the public was not clear about the meaning of the public measure of “lockdown.” The effectiveness of public measure interventions depends not only on strong policies but also on the correct cognition and compliance of the public measures. Thus, if the public lacked interest or understanding in public measures, this could jeopardize COVID-19 prevention and control . Also, news media is an important tool for achieving good risk communication at the early stage of an infectious disease epidemic and for improving the control effect of policies or measures . Therefore, before or at the initial stage of implementing new policies or measures, the government can use the news media to propagate policies and develop a good risk communication strategy to obtain high-quality health communication effects to better control the spread of COVID-19 . 
When comparing search query trends with news coverage, the search query trends showed public interest, and the news reflected mass health communication. Also, the number of new cases was one indicator reflecting the severity of the epidemic and the level of prevention and control. Under the eight countries’ different cultural, political, and epidemic situations, there were three health communication patterns: (1) the pattern for Singapore, (2) the pattern for the United States, and (3) the pattern for the other countries. Regarding the pattern for Singapore, it was quite different from that of the other countries. The biggest difference was that the search query peaks appeared earlier than those of the other countries, indicating that Singaporeans were more concerned in the early period of the epidemic. Moreover, in Singapore, the results of the time-lag analysis between the “treatments and medical resources” topic, the “symptoms and signs” topic, the number of daily news items, and the number of new daily cases were also different from those of the other countries. The correlation was negative and low. Among them, the correlation between the Singaporean public’s search interest in “treatments and medical resources” and the number of daily news items was low (Figure S4 in ), indicating that at the early stage of the COVID-19 epidemic, the Singapore public’s early attention toward “treatments and medical resources” was less likely to be affected by the number of news reports, but was likely to be affected by other factors. Two main reasons could be used to explain the Singapore public’s interest. One was that Singapore, as a tourism hub, has frequent tourism-business exchanges with neighboring China. The other was that Singapore had learned hard lessons from SARS in 2003 , so it had taken various measures to control the spread of the virus early in the epidemic, such as temperature checks and health screening, public education, and quarantine. These measures potentially made the public aware of a new threat and relevant health information as soon as possible and, thus, improved the public’s sensitivity and vigilance to COVID-19 via health communications . In other words, Singapore had done a good job of containment and prevention at the early stage. Similarly, the Singaporean public’s early interest in symptoms was likely affected by other factors or events, such as the first COVID-19 case (Table S1 in ), though the determination of the cause of RSV changes needs further analysis. Regarding the patterns of the United States and other countries, the amount of news coverage in the United States was much higher than in other countries. The number of new cases was also far higher than in other countries. Therefore, to some extent, their level of news coverage related to COVID-19 was justifiable, but that might also be an illusion caused by the irregularity of the data collection methods. In general, news coverage in most countries was highly responsive to the COVID-19 epidemic in late March. In addition, the results of the time-lag correlation analysis between the number of daily news items and the overall RSV for the topic “diseases” also reflected the fact that news reports appeared later than search queries, with lag times ranging from 0 to 17 days. Moreover, the correlation between the two was relatively high and gradually decreased over time, indicating that in this study, the public’s interest in the COVID-19 outbreak occurred earlier than the appearance of news media reports.
Based on Dutta-Bergman’s channel complementarity theory, Zillmann and Bryant’s selective exposure theory, and Rubin’s use and satisfaction theory, which assume that active audiences use different media channels to meet their needs , we may use these to explain the relationship between news coverage and search query trends. To be specific, in the uncertainty of this COVID-19 epidemic, there was initially little news coverage, indicating that the public was probably not sufficiently informed, so the public’s search volume was higher. As the news coverage increased, more information was available, and uncertainty decreased, as did the online search behavior of the public. However, the number of overall RSVs in the same period began to decline, which might be a kind of public desensitization for COVID-19, likely caused by continuous extensive news coverage . That is, at the early stage of the COVID-19 epidemic, there was an increase in health information–seeking behaviors because the public lacked relevant information . Therefore, in this case, Google Trends could reflect information needs and potentially provide appropriate window periods and locations for risk communication and health communication . In the face of emerging infectious diseases, the public lacks relevant information, and timely and effective risk communication is necessary. News media is a key resource in shaping public awareness of risks and communicating relevant health information; it has great potential to become an effective partner in health communication, which could promote risk communication and the implementation of disease prevention and control strategies . In this research, the public’s interest in different topics had different characteristics, and their interest was related to factors such as the development of the epidemic and media reports. This also reminded countries or public health departments that when communicating with the public, they should unite with the news media as soon as possible, pay close attention to changes in public interests by monitoring Google Trends search data and media reports, plan the nature and content of news items, and provide the information needed by the public in a more reasonable manner, in order to better prevent and control epidemics at their early stages, such as the COVID-19 epidemic . However, the RSVs of the search terms from Google Trends are relative values and do not provide the exact values of the actual search volumes. As some search terms with higher search volumes appear, the change in trend of search terms may be underestimated . As a result, it somewhat reduces the usability of Google Trends, though the linear trend of individual search terms does not change. However, in some studies, by collecting more data to analyze seasonal differences and long-term trends, we can further analyze whether there are changes in search terms and explore the meaning and reasons of these changes . In addition, Google Trends has the characteristic of being available in real time, which can not only be used to monitor public emotions, reactions, and needs in real time, but can also be used to evaluate the effects of risk communication and public health interventions and the impact of major events or policies, among other factors. 
For example, interrupted time series analysis was used to assess the impact of celebrity suicides on search volumes, as well as the impact of tobacco control policies on search rates for smoking cessation information, in order to evaluate the effectiveness of the policy implementation . In the internet era, with the popularity of mobile terminals, online searching is a two-way communication process, including sending search requests and receiving search results. Sending a search request reflects the public’s response to the severity and urgency of the risk and actual needs, and receiving search results provides feedback in response to the public’s views on their ability and effectiveness to manage or respond to risks . Therefore, timely responses and exploration of data are very important, and Google Trends has the characteristic of real-time availability. In addition, Google Trends can also integrate more data sources, such as Twitter and Facebook, among others, so Google Trends data are still valuable . Limitations Google Trends has its own limitations. For example, it is more applicable to study high-prevalence diseases in countries where the internet is popular and when providing a relative versus exact value for search volume. Due to Google’s existing language limitations , we only studied the major English-speaking countries. Also, Google search data and news data might not be comprehensive enough and might not have included all of the search terms or topics related to COVID-19. For example, we did not include some important symptoms (eg, “loss of taste or smell”), and we omitted some similar terms such as “Wuhan virus.” In addition, “pneumonia” was not related only to COVID-19, but could also be related to influenza. Also, there was no one-to-one correspondence between news coverage data and search terms and topics. Therefore, further studies should apply detailed search terms and extract more news data to explore additional values. Conclusions Through Google Trends, we identified the level of public interest for various aspects at the early stages of the COVID-19 epidemic, learned about public concern and neglect, and revealed the potential value of Google Trends in monitoring public response and demand, prediction, and other aspects in the face of the occurrence of emerging infectious diseases. In addition, news media as an essential source of information, combined with Google Trends, could achieve more effective health communication. Therefore, both news coverage and Google search trends could potentially contribute to the prevention and control of epidemics at the early epidemic stage.
Asymmetric Synthesis of Tetrasubstituted α-Aminophosphonic Acid Derivatives
080752d6-301f-4752-b79a-3c4dfd88e133
8199250
Pharmacology[mh]
α-Amino acids are a key structure in living organisms as the essential part of proteins and peptides. Many α-amino acid derivatives are used in daily life, such as the sweetener aspartame, penicillin-derived antibiotics, or the antihypertensive enalapril. Due to the relevance of α-amino acids in nature, a vast number of methods for the synthesis of natural and non-natural α-amino acids have been developed . Among the most relevant α-amino acid mimetics, α-aminophosphonic acids are the result of a bioisosteric substitution of the planar carboxylic acid by a phosphonic acid group in α-amino acid structures . This isosteric replacement is of great interest since, due to the tetrahedral configuration of the phosphorus atom, α-aminophosphonic acid derivatives can behave as stable analogues of the transition state for the cleavage of peptides, thus inhibiting enzymes involved in proteolysis processes and, consequently, displaying assorted biological activities . In particular, a number of α-aminophosphonic acid derivatives have found applications as agrochemicals , as well as antimicrobial , antioxidant or anticancer agents . The thalidomide disaster was a shocking revelation of the strong dependence of the biological activity of chiral substrates on their absolute configuration. This dependence is also evident for α-aminophosphonic acid derivatives: for example, (R)-phospholeucine exhibits stronger activity as a leucine-peptidase inhibitor than its enantiomer , and the phosphopeptide (S),(R)-alaphosphalin has a more efficient antibiotic activity than its other three possible isomers . α-Aminophosphonic acids are usually obtained from the hydrolysis of their phosphonate esters and, for this reason, the development of efficient synthetic methodologies to access enantioenriched α-aminophosphonates has become an imperative task in organic chemistry. The existing literature to date in this field is mostly related to the synthesis of trisubstituted α-aminophosphonates, and examples illustrating asymmetric strategies leading to the tetrasubstituted substrates are scarce . The efficient formation of quaternary centers is a well-known critical challenge in organic synthesis, and the formation of tetrasubstituted centers from ketimines was for a long time unachievable. The poor electrophilic character of the ketimine group and the additional steric hindrance on the substrate, which results in decreased reactivity, are the two main challenges to overcome. In addition, the enantiotopic faces of ketimine substrates are not as easily discriminated as those of aldimines if asymmetric syntheses are required . The existing methods for the synthesis of tetrasubstituted α-aminophosphonates can be classified into three main groups, depending on the type of bond created in the key reaction leading to their formation . The first of these approaches implies the use of strategies that entail C-C bond formation, either through the addition of carbon nucleophiles to α-phosphorylated imines (a) or through the functionalization of α-aminophosphonate anions with electrophiles (b). In addition, the most straightforward method for the synthesis of α-aminophosphonates comprises reactions that imply C-P bond formation through the addition of phosphorus nucleophiles to ketimines (c). Another alternative to these routes consists of processes that involve C-N bond formation, which are carried out mainly through electrophilic amination reactions (d).
In the following sections of this review, the existing methodologies regarding the asymmetric synthesis of tetrasubstituted α-aminophosphonates are summarized. The synthetic routes for the preparation of these compounds are classified into diastereoselective and enantioselective methodologies and grouped by the type of bond formed in the key step. 2.1. C-C Bond Formation A simple strategy for the preparation of tetrasubstituted α-aminophosphonates is the functionalization of trisubstituted α-aminophosphonates, taking advantage of the acidic nature of the hydrogen atom adjacent to the phosphorus substituent. Using this approach, Seebach described in 1995 the first example of a stereoselective synthesis of tetrasubstituted α-aminophosphonic acids . As shown in , racemic imidazolidinone 2 is first obtained starting from glycine ester 1, by formation of the amide derivative with dimethylamine and subsequent condensation of the amino group with pivalaldehyde. Then, a kinetic resolution using (R)-mandelic acid 3 is carried out, allowing the isolation of diastereomeric salt (R,R)-4, which is treated first with NaOH and then with Boc2O and DMAP to yield enantiomerically pure imidazolidinone 5. Bromide 6 is then formed via radical halogenation, and the subsequent Arbuzov reaction with trimethyl phosphite leads to a single isomer of tertiary α-aminophosphonate 7 in moderate yield . Next, in the key step, compound 7 is treated with a strong base and an alkyl, allyl or benzyl halide, leading to the formation of tetrasubstituted α-aminophosphonates 8 in good yields and excellent diastereoselectivities (72–83%, >98:2 dr). In all cases, the electrophilic reagent approaches from the less hindered face of the imidazolidinone ring, in an anti-addition. In order to obtain the acyclic α-aminophosphonic acid derivative 9, the authors performed a reduction of the amide carbonyl in 8 and a hydrolysis of the resulting intermediate in aqueous HCl . In addition, the authors extended their methodology to the use of imidazolidines, instead of amides 7, obtaining, after the addition of benzyl, methyl or allyl halides, cis tetrasubstituted α-aminophosphonates in moderate yields and diastereoselectivities (33–53%, 1.3:1–3.8:1 dr). However, the use of ethyl, propyl and butyl halides gave trans products in better yields and diastereomeric ratios (45–60%, 1:17 ≤ 1:50 dr). A similar procedure was developed by Davis for the synthesis of pyrrolidine-derived α-aminophosphonates 17 . In this case, sulfinyl imine 10 is attacked by the enolate derived from ethyl acetate to obtain N-sulfinyl β-amino ester 11 in a diastereoselective fashion. Then, the addition of lithium methyl phosphonate leads to N-sulfinyl δ-amino β-ketophosphonate 12 in very good yield. Next, the sulfinyl protecting group is easily removed and replaced with a Boc group, obtaining amide 13 which, after treatment with 4-acetamidobenzenesulfonyl azide, provides the corresponding α-diazo derivative 14. Finally, the treatment of α-diazophosphonate 14 in the presence of the Rh2(OAc)4 catalyst yields cis pyrrolidine phosphonate 15 as the major diastereoisomer (68%, 81:19 dr) . Tertiary α-aminophosphonate 15 can be functionalized, with retention of the configuration, using a strong base and allyl bromide, providing tetrasubstituted α-aminophosphonate 16. Although substrate 16 is obtained as a mixture of rotamers, the removal of the N-Boc group renders pyrrolidin-3-one 17 as a single isomer.
Additionally, acyclic α-amino α-ketophosphonate 18 can also be prepared after a ring-opening process via a Pd-catalyzed hydrogenation.

Following the same principle, Amedjkouh described the synthesis of bicyclic α-aminophosphonate 24. In this case, the synthetic methodology starts with the preparation of oxazolopyrrolidine phosphonate 23 from (R)-phenylglycinol (19), benzotriazole (20), and 2,5-dimethoxytetrahydrofuran (21); enantiomerically pure oxazolopyrrolidine 22 is obtained by formation of succinaldehyde from the hydrolysis of furan derivative 21 and subsequent multicomponent reaction with substrates 19 and 20. Then, tertiary oxazolopyrrolidine phosphonate 23 is formed through an Arbuzov reaction of oxazolopyrrolidine 22 and triethyl phosphite. The treatment of α-aminophosphonate 23 with butyllithium, followed by the addition of alkyl halides, results in the formation of tetrasubstituted α-aminophosphonates 24 with total retention of the configuration, in moderate to good yields when aliphatic halides are used (35–81%), but in low yield with benzyl halide (10%). Finally, the elimination of the chiral auxiliary via catalytic hydrogenation affords optically pure phosphoproline derivative 25. According to the authors, the high diastereoselectivity observed for this transformation is related to the proposed transition state TS1. Thus, the lithium ion is coordinated to the phosphonate oxygen and the tertiary nitrogen atoms, forming a five-membered pseudocyclic ring in which the σ* P=O acceptor orbital lies parallel to the lone pair of the anion, which is therefore stabilized by hyperconjugation. Under this conformation, the functionalization with the alkyl group occurs with retention of configuration.

Another possibility for the asymmetric formation of tetrasubstituted α-aminophosphonates implying C-C bond formation relies on the addition of carbon nucleophiles to chiral ketimines. In this context, our research group described in 2013 a methodology for the preparation of tetrasubstituted α-aminophosphonates through the addition of carbon nucleophiles to α-phosphorated ketimines 29. First, the Pudovik reaction of TADDOL-derived chiral phosphite 26 with imine 27 affords trisubstituted α-aminophosphonate 28 as a mixture of diastereoisomers (93%, 1:1 dr). α-Aminophosphonate 28 is then transformed into the enantiopure chiral ketimine 29 through a formal oxidation, consisting of an initial chlorination followed by a β-elimination of hydrogen chloride using a polymeric base. Then, organometallic species react rapidly with α-iminophosphonate 29, delivering tetrasubstituted α-aminophosphonates 30 in very good yields. Better results in terms of diastereoselectivity are obtained in this reaction with aliphatic Grignard reagents (R = Me, 80%, 94:6 dr) than with aromatic ones (R = 2-naphthyl, 81%, 55:45 dr). Additionally, the hydrolysis of the tosyl and chiral auxiliary groups affords tetrasubstituted α-aminophosphonic acid (S)-31. In the proposed transition state, the phosphorus-containing seven-membered ring adopts a more stable boat conformation, which is fixed by the trans configuration of the five-membered fused ring. Under this conformation, the two heteroatoms must adopt the more stable equatorial orientation, forcing the two hydrogen atoms into the axial positions. According to this model, the nucleophilic attack on the Re-face is substantially favored, due to the axial phenyl groups blocking the Si-face.
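Since diastereoselectivities are quoted throughout this section in several notations (e.g., >98:2, 1.3:1, 1:17, 94:6), a quick conversion between a diastereomeric ratio (dr), the diastereomeric excess (de) and the fraction of the major isomer can make the figures easier to compare. The short Python sketch below is a generic helper added here for convenience; the function name and the example values are illustrative and are not taken from the cited works.

```python
def dr_to_de(major: float, minor: float) -> tuple[float, float]:
    """Convert a diastereomeric ratio (major:minor) into the diastereomeric
    excess (de, %) and the percentage of the major diastereoisomer."""
    total = major + minor
    de = 100.0 * (major - minor) / total
    major_pct = 100.0 * major / total
    return de, major_pct

# Illustrative values only:
print(dr_to_de(98, 2))     # (96.0, 98.0)  -> a 98:2 dr means 96% de
print(dr_to_de(94, 6))     # (88.0, 94.0)
print(dr_to_de(1.3, 1.0))  # about 13% de, i.e. an almost 1:1 mixture
```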
Menthol-derived phosphonate imines 32 are another kind of useful chiral electrophile employed in the diastereoselective preparation of tetrasubstituted α-aminophosphonates. For example, the use of proline (I) as a chiral catalyst in the addition of acetone (33) gives α-aminophosphonate 34 in good yield and diastereoselectivity (86%, 1:30 dr). The authors also describe the addition of other nucleophiles to ketimines 32, such as pyrroles 35, indole (37) and nitromethane, to yield α-aminophosphonate derivatives 36, 38 and 39, although in these cases lower diastereoselectivities are observed.

The same research group also used chiral N-methylbenzylimines 41 in cycloaddition reactions for the asymmetric preparation of tetrasubstituted α-aminophosphonates 43. In this case, the synthesis of chiral imine 41 comprises an initial treatment of (S)-1-phenylethylamine ((S)-40) with trifluoroacetic acid and triphenylphosphine, in the presence of trimethylamine, in order to form haloimine 41. Then, the Arbuzov reaction with triethyl phosphite yields the corresponding fluorinated α-ketiminophosphonate 42. Upon treatment with diazomethane, ketimine 42 undergoes a cycloaddition reaction that leads to the formation of triazoline-derived α-aminophosphonate 43 in good yield but with moderate diastereoselectivity (83%, 2.5:1 dr). Although the authors did not assign the configuration of the major isomer, both can be separated by chromatography.

2.2. C-P Bond Formation

Another common strategy for the preparation of tetrasubstituted α-aminophosphonates comprises the addition of phosphorus nucleophiles to ketimines. For example, Davis described in 2001 the use of p-toluenesulfinyl imines 44 as starting materials for the diastereoselective preparation of tetrasubstituted α-aminophosphonates 45. In this report, chiral ketimines 44 are treated with lithium diethyl phosphite at low temperature to obtain α-aminophosphonates 45 in excellent yields and diastereoselectivities (71–97%, 82:18–99:1 dr). In the last step, enantiomerically pure α-aminophosphonic acids 46 can be isolated by simple hydrolysis using hydrochloric acid. The high degree of diastereoselectivity in this reaction is explained by the authors through a seven-membered twisted-chair transition state in which the lithium cation is chelated to both the sulfinyl and phosphite oxygen atoms. TS3 is favored because, under this conformation, the bulky aryl group adopts an energetically favored equatorial position, whereas in TS4 the aromatic substituent must assume an energetically less favorable axial position.

Following a similar methodology, the same research group also describes the preparation of tetrasubstituted phosphoproline derivative 50. In this case, the treatment of oxo-sulfinimine 47 with lithium diethyl phosphite gives α-aminophosphonate 48 which, after treatment with hydrochloric acid, forms cyclic tetrasubstituted α-aminophosphonate 49 in good yield. Then, a syn addition of molecular hydrogen from the less hindered face yields phosphoproline derivative 50 with 50% enantiomeric excess. Based on the same principle, Yuan's group used tert-butylsulfinyl imines 51 as chiral auxiliaries for the synthesis of tetrasubstituted α-aminophosphonates 52, via nucleophilic addition of phosphites.
Substrates 52 are obtained with high yields and diastereoselectivities in all cases (70–85%, 86:14 to ≥98:2 dr), using different alkyl phosphites (R2 = Me, Et, n-Pr) and several alkyl or aromatic substituents in sulfinimine 51. In an identical way as described by Davis, substrates 52 can be transformed into α-aminophosphonic acids 53 by a simple hydrolysis. For this transformation, the authors propose the plausible transition state TS5, in which the potassium cation is chelated to the sulfinyl and phosphonate oxygens, and the nucleophilic attack of the phosphite occurs from the less hindered face, opposite to the tert-butyl group. A few years later, Ellman's research group described a modification of this reaction using potassium bis(trimethylsilyl)amide (KHMDS), which improves solubility, providing in this way α-aminophosphonates 52 with better conversions and excellent yields, albeit with lower diastereoselectivities (88–95%, 12:1–99:1 dr).

In addition, Yuan's group used this strategy, employing chloro-substituted sulfinyl imines 54 for the preparation of three-, four- and five-membered cyclic tetrasubstituted α-aminophosphonates 56. The reaction proceeds efficiently using different imines 54 (R = Me, Ph; n = 1, 2, 3), and heterocyclic substrates 56 can be obtained through intermediate 55 in good yields and diastereoselectivities (75–83%, 71:29–92:8 dr). In this transformation, a strong dependence of the diastereoselectivity on the ring size is observed: an excellent dr (92:8) is obtained for the three-membered cyclic substrate, while a drop in diastereoselectivity is observed for the four-membered derivative (89:11 dr) and even lower dr values are obtained for the five-membered heterocycles (71:29 dr). Likewise, in 2014, Liu and colleagues extended this strategy to the nucleophilic addition of diphenyl phosphite to fluorine-substituted α,β-unsaturated sulfinimines 57, in this case in the presence of a rubidium catalyst. Allyl α-aminophosphonates 58 are obtained in good yields and diastereoselectivities (56–87%, 75:25–92:8 dr) with different fluoroalkyl substituents. In addition, the selective deprotection of the sulfinyl group in acidic media, to produce α-aminophosphonate 59 in good yield, is described. In 2017, Cramer reported an additional example of an asymmetric addition of phosphorus nucleophiles to imines for the preparation of tetrasubstituted α-aminophosphonates. In this work, chiral imine 61 is first obtained in good yield and enantiomeric excess (90%, 97% ee) from imidoyl chloride 60 in the presence of a palladium catalyst II and CsOAc. Then, via a boron trifluoride-mediated hydrophosphonylation reaction of imine 61, α-aminophosphonate 62 is formed in good yield as a single diastereoisomer.

A related strategy for asymmetric induction in the preparation of tetrasubstituted α-aminophosphonates consists of the use of acetal-derived iminium salts as electrophiles. In 2000, Fadel and colleagues detailed a one-pot synthesis of cyclopropane α-aminophosphonates 65 (R1 = Me) using this methodology. In this example, the sequence starts with the cyclization of bromoester 63 (R1 = Me) in the presence of sodium and TMSCl, to obtain silylated acetal 64. Then, deprotected hemiacetal intermediate 66 is formed by alcoholysis in the presence of a catalytic amount of an acid source (TMSCl or AcOH), followed by reaction with (S)-1-phenylethylamine ((S)-40) to furnish α-amino alcohols 67.
Under acidic conditions, intermediate 67 is converted into iminium species 68, which undergoes nucleophilic addition of the phosphite (R2 = Me, Et) from the less hindered face of the C=N bond, finally providing diastereoisomeric phosphonates 65 (R1 = Me) in good yields and diastereoselectivities (60–82%, 80:20–88:12 dr). A few years later, the authors extended the scope of this reaction to differently substituted acetals 64 (R1 = Et, Bn, iPr, tBu), obtaining cyclopropane-derived α-aminophosphonates 65 in good yields and excellent diastereoselectivities (56–78%, 76:24–100:1 dr). Remarkably, the use of a tert-butyl substituent provided a single diastereoisomer (100:1 dr).

Following the same approach, Fadel's group later described the synthesis of spirocyclic α-aminophosphonates 70. Starting also from silylated acetal 64, deprotected acetal intermediate 66 is again formed by alcoholysis, and then iminium species 72 is obtained by reaction with (R)-phenylglycinol under acidic conditions. The subsequent nucleophilic addition of triethyl phosphite gives spirophosphonate 69, by means of an intramolecular transesterification, in good yield and diastereoselectivity (71%, 89:11 dr). However, the use of norephedrine as the chiral source results in a drop in both yield and diastereomeric ratio (36%, 78:22 dr). It must be pointed out that substrates 69 are obtained as a mixture of epimers (80:20), due to the presence of an additional chiral center at the phosphorus atom. The major diastereoisomer can be isolated and further transformed into enantiopure cyclopropane-derived aminophosphonic acid 70 by an initial hydrogenolysis reaction, followed by hydrolysis of the phosphonate group.

In 2007, Fadel also described a similar process, in this case using heterocyclic iminium salts. In this approach, N-Boc-protected piperidinone 73 is treated with (S)-1-phenylethylamine ((S)-40) in the presence of acetic acid, followed by the nucleophilic addition of triethyl phosphite, leading to the formation of tetrasubstituted α-aminophosphonates 74 and 75 as an inseparable mixture of diastereoisomers (75%, 60:40 dr). Then, after hydrolysis of the N-Boc protecting group, diastereoisomers 76 and 77 are formed, which in this case can be separated. In addition, the hydrogenolysis and hydrolysis of each diastereoisomer gives enantiopure α-aminophosphonic acids 78 and 79.

Along the same line, the same research group described the preparation of bicyclic tetrasubstituted α-aminophosphonates 84 starting from ketone acetals 80. The esterification reaction of acetals 80 with (S)-phenylalanine derivative 81, which acts as a chiral auxiliary, leads to intermediate 82, which is cyclized to form imine substrates 83 as a mixture of diastereoisomers. Formation of the iminium cation in the presence of triethyl phosphite gives bicyclic tetrasubstituted α-aminophosphonates 84 in good yields and excellent diastereoselectivities (46–77%, 89:11–99:1 dr). These α-aminophosphonates are transformed into cyclic serine analogues 87 through their oxidation and further hydrolysis of the intermediate imine 85/enamine 86 mixture. In this case, the authors rationalize the high level of diastereoselectivity by an equilibrium between the two iminium epimers in TS6 and TS7 through the parent enamine salt.
It is estimated that the chair–boat conformation (TS6) is favored by 2.14–5.56 kcal/mol relative to the epimeric twist boat–boat conformation (TS7), resulting in the kinetic addition of the phosphite to the less hindered Si-face of the iminium species in TS6. In addition, the use of bicyclic iminium salts has been reported for the asymmetric preparation of cyclic α-aminophosphonates 90. Chiral cyclic imines 89 are synthesized from diamine 88 and ketoesters, and their subsequent treatment in toluene with dialkyl phosphites gives tetrasubstituted α-aminophosphonates 90 in high yields and diastereoselectivities (78–95%, 68:32–98:2 dr). However, if imines 89 are activated with bromotrimethylsilane, they are thought to form an iminium ion that is reactive towards tris(trimethylsilyl) phosphite; in this way, α-aminophosphonic acid derivatives 91 can be obtained in high yields and diastereoselectivities (70–99%, 85:15–98:2 dr).

Another useful strategy for the diastereoselective synthesis of tetrasubstituted α-aminophosphonates with C-P bond formation, complementary to the hydrophosphonylation of chiral imines, is the addition of chiral phosphorus nucleophiles to activated ketimines. For example, in 2011, Chen and Miao used a multicomponent Kabachnik–Fields reaction of phosphorylated chiral nucleophile 92, diethyl phosphoramidate 93 and ketone 94 to obtain α-aminophosphonate 95 in a diastereoselective fashion. The authors propose that the nucleophilic addition in TS9 is less favored than the addition proposed in TS8, where the chiral dioxaphospholanedicarboxylate 92, which plays a crucial role in the control of the diastereoselectivity, reacts from the sterically less hindered face.

A different methodology for the asymmetric formation of tetrasubstituted α-aminophosphonates that involves C-P bond formation was reported by Hammerschmidt, in which a phosphoramidate-α-aminophosphonate rearrangement leads to the formation of diverse α-aminophosphonates 98 in moderate to good yields and with excellent stereocontrol (38–80%, 96–99% ee). This route involves N-Boc protection of phosphoramidates 96 and metalation with sec-butyllithium to form the corresponding carbanion 99. The rearrangement of the phosphorus substituent and the final quenching with acetic acid provides tetrasubstituted α-aminophosphonates 98.

2.3. C-N Bond Formation

The introduction of nitrogen reagents into the skeleton of phosphonates is also an alternative methodology that can be useful for the preparation of tetrasubstituted α-aminophosphonates. In this regard, in 1999 Davis and colleagues successfully applied the aza-Darzens reaction for this purpose. Starting from chiral sulfinyl imine 101 and diethyl 1-chloroethylphosphonate (102), a mixture of three isomers of α-chloro-β-amino adducts 103–105 is initially obtained. The major isomer 103 can be isolated by chromatography and then, in the presence of sodium hydride, enantiomerically pure aziridine 106 is obtained. After elimination of the chiral auxiliary group with TFA, followed by ring-opening via hydrogenolysis, enantiopure tetrasubstituted α-aminophosphonate (R)-107 is obtained. For the preparation of the opposite enantiomer, the remaining mixture of α-chloro-β-amino adducts 104 and 105 is used. After hydrolysis of the chiral auxiliary group and subsequent ring-opening of the corresponding aziridine intermediate, tetrasubstituted α-aminophosphonate (S)-107 is obtained.
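From this point onwards, most results are quoted as enantiomeric excesses (ee) rather than diastereomeric ratios. The relation between ee and the underlying enantiomer composition (for instance, as estimated from chiral HPLC peak areas) is analogous to the dr/de conversion shown earlier. The following Python sketch is again a generic, illustrative helper and not part of any cited procedure; the function names and values are assumptions made only for the example.

```python
def ee_to_composition(ee_percent: float) -> tuple[float, float]:
    """Percentage of major and minor enantiomer corresponding to a given ee (%)."""
    return (100.0 + ee_percent) / 2.0, (100.0 - ee_percent) / 2.0

def ee_from_areas(area_major: float, area_minor: float) -> float:
    """ee (%) from the integrated peak areas of the two enantiomers."""
    return 100.0 * (area_major - area_minor) / (area_major + area_minor)

# Illustrative values only: 96% ee corresponds to a 98:2 enantiomer mixture,
# and a 75:25 mixture corresponds to 50% ee.
print(ee_to_composition(96.0))    # (98.0, 2.0)
print(ee_from_areas(75.0, 25.0))  # 50.0
```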
Continuing with strategies that entail C-N bond formation, the Curtius rearrangement is also a useful method for the introduction of amino groups starting from carboxylic acids, and it can be applied to the preparation of tetrasubstituted α-aminophosphonates. For example, in 1999, Le Corre used enantiomerically pure chiral sulfate 108 and phosphorylated malonate derivative 109 for the preparation of cyclopropane phosphonate 110. Then, the ester group is hydrolyzed to obtain carboxylic acid-substituted phosphonate 111 which, after activation of the acid with thionyl chloride and addition of sodium azide, leads to acyl azide species 112/113. At this point, the Curtius rearrangement gives isocyanate 114, which is captured by means of the in situ addition of benzyl alcohol to afford N-protected amino ester 115. Finally, the benzyl protecting group is hydrolyzed, yielding cyclopropane-derived tetrasubstituted α-aminophosphonate 116.

Ito's group also used the Curtius rearrangement for the preparation of tetrasubstituted α-aminophosphonate 123. The rhodium-catalyzed conjugate addition of cyanophosphonate 117 to acrolein (118) leads to the formation of aldehyde 119 with high yield and enantiomeric excess (80%, 92% ee). Compound 119 is then treated with phosphonium ylide 120, and the newly formed C=C bond, prepared through Wittig olefination, is directly hydrogenated to obtain cyanophosphonate 121. The acidic hydrolysis of the nitrile moiety in this substrate, followed by an in situ esterification of the carboxylic acid intermediate with diazomethane, affords phosphorated methyl ester 122. Finally, the methoxycarbonyl group in 122 is selectively hydrolyzed under basic conditions, and the resulting carboxylate is treated with diphenyl phosphorazidate which, by means of a Curtius rearrangement followed by trapping with benzyl alcohol, affords tetrasubstituted α-aminophosphonate 123 (81%, 88% ee).

Following a similar approach, a few years later, Krawczyk and colleagues reported an analogous reaction, in which the synthetic route starts with the reaction between cyclic sulfate 124 and ethyl diethoxyphosphorylacetate 125 to afford phosphorated ester 126 as a single diastereoisomer. In order to perform the Curtius rearrangement, the ester group first needs to be hydrolyzed to form carboxylic acid 127, and then the addition of diphenylphosphoryl azide (DPPA) affords isocyanate 128, which is immediately captured as carbamate 129 by the in situ addition of ethanol. In addition, the benzyl and carbamate protecting groups can be eliminated via hydrogenolysis and hydrolysis, respectively, affording enantiopure α-aminophosphonic acid derivative (1R,2S)-130 in excellent yield.

For the preparation of the opposite enantiomer, (1S,2R)-130, the authors used a complementary strategy. In this case, the synthesis of racemic lactone 132 is performed by treatment of epibromohydrin 131 with malonate derivative 125 in the presence of sodium hydride. Then, the reaction of lactone 132 with (R)-1-phenylethylamine 40 gives products (1S,2R,1′R)-133 and (1R,2S,1′R)-133 as a mixture of diastereoisomers, which can be separated by column chromatography. Once pure (1S,2R,1′R)-133 is isolated, hydrolysis of the amine in the presence of sulfuric acid yields enantiomerically pure lactone (1S,5R)-134, which is then treated with saturated methanolic ammonia followed by an acylation reaction to obtain amide 135.
The lead tetraacetate-mediated Hofmann rearrangement in tert-butyl alcohol gives carbamate 136, which in the presence of potassium carbonate yields N-Boc aminocyclopropane phosphonate 137. Finally, the sequential treatment of 137 with TFA and bromotrimethylsilane affords α-aminophosphonic acid derivative (1S,2R)-130 in excellent yield.
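Before moving on to enantioselective methods, it may help to see how the transition-state free-energy differences invoked in the stereochemical models above (for example, the 2.14–5.56 kcal/mol preference estimated for TS6 over TS7) translate into product ratios. The sketch below applies the standard Boltzmann relation under the assumption of purely kinetic control; it is a generic back-of-the-envelope illustration, not a calculation reported in the cited work, and the room-temperature value used is an assumption.

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)

def isomer_ratio(delta_g_kcal: float, temp_k: float = 298.15) -> float:
    """Major:minor ratio predicted from the free-energy difference between
    two competing transition states, assuming purely kinetic control."""
    return math.exp(delta_g_kcal / (R_KCAL * temp_k))

for ddg in (2.14, 5.56):  # the range estimated for TS6 vs TS7
    ratio = isomer_ratio(ddg)
    major_pct = 100.0 * ratio / (1.0 + ratio)
    print(f"{ddg} kcal/mol -> about {ratio:.0f}:1 ({major_pct:.1f}% major isomer)")

# 2.14 kcal/mol already corresponds to roughly 37:1 at 25 degrees C,
# and 5.56 kcal/mol to an essentially single diastereoisomer.
```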
3.1. C-C Bond Formation

The first example of an enantioselective synthesis of tetrasubstituted chiral α-aminophosphonates was reported in 1999 by Ito's group. The reaction consists of an asymmetric palladium–IV-catalyzed allylation of racemic β-keto-α-aminophosphonates 138 that provides optically active α-aminophosphonates 140 in moderate to good yields and enantiocontrol (27–87%, 46–88% ee). In addition, the authors also report the subsequent diastereoselective reduction of the ketone moiety, affording β-hydroxy-α-aminophosphonates 141. Remarkably, when the reaction is carried out in methanol at low temperature, using sodium or tetra-n-butylammonium borohydrides as reducing reagents, the 141-syn isomer is obtained (74–89%, 74:26 to 82:18 syn/anti ratio). In contrast, the reaction in tert-butyl alcohol at 50 °C, using sodium borohydride, affords the opposite isomer (78%, 15:85 syn/anti ratio).

Due to their strong nucleophilic character, nitroalkane enolates have been widely used in organic chemistry for the functionalization of aldehydes or imines since the Henry reaction was reported in 1895. Several examples of the enantioselective nucleophilic addition of α-nitrophosphonates to different electrophiles for the synthesis of α-aminophosphonates have been reported in recent years. In this context, the first example of this reaction was reported in 2008 by Johnston and colleagues. The reaction consists of a Brønsted acid V-catalyzed addition of trisubstituted nitrophosphonates 142 to N-Boc aldimines 143, which leads to chiral phosphonates 144 in yields ranging from 48% to 86% and with high stereocontrol (up to 94:6 dr, up to 99% ee). Next, the reduction and hydrolysis of α-nitrophosphonate 144 (Ar = p-PhOC6H4) under acidic conditions results in the formation of enantioenriched α,β-diaminophosphonate 145 in 84% yield with retention of the absolute configuration. According to the authors, the chiral Brønsted acid catalyst activates both the imine and the nitrophosphonate substrates through hydrogen bonding. Since both the nitro and phosphoryl groups may be activated by acid catalysts, a bulky phosphonate is selected in order to minimize the activation of the phosphoryl group by enhancing the steric repulsion. The nitro group is, in this way, located close to the catalyst and the 144-anti product is favored through transition state TS10, while the high bulkiness of the phosphonate moiety prevents the formation of TS11, so that the 144-syn isomer is obtained as the minor product.

As a continuation of this work, Namboothiri and colleagues reported in 2012 the conjugate addition of α-nitrophosphonates 146 to α,β-unsaturated ketones 147, using in this case cinchona alkaloid-derived thiourea VI as a bifunctional catalyst. For this transformation, a tentative addition model is proposed by the authors, in which both acidic protons of the thiourea moiety activate the ketone by means of hydrogen bonding, while the basic nitrogen of the quinuclidine unit stabilizes the nitrophosphonate anion (TS12–TS13).
Although the reaction affords optically active α-nitrophosphonates 148 in high yields (70–97%), the enantioselectivity is found to be strongly dependent on the substituent of the ketone. Thus, when aromatic groups bearing electron-donating substituents are used, enantioselectivities ranging from 70% to 94% are obtained. In contrast, the use of some electron-poor aromatic groups, such as 4-nitrophenyl, heteroaryl substituents such as 2-furyl or 2-thienyl, and aliphatic cyclohexyl substituents results in a drastic decrease in the enantioselectivity (35–44%). Following their interest in this reaction, a few years later the same authors found that the use of the more strongly acidic squaramide catalyst VII improves the poor enantioselectivity observed when electron-withdrawing groups are used. A relevant improvement not only in the stereocontrol but also in the reactivity was reported. Thus, yields above 90% and enantioselectivities ranging from 85% to 99% are obtained for all the aromatic and heteroaromatic ketones except those bearing 2-substituted aryl groups such as 2-Cl-C6H4 (97%, 74% ee) and 1-naphthyl (98%, 15% ee). In contrast, the cyclohexyl-substituted enone only affords moderate yield and enantioselectivity (70%, 51% ee).

In addition, the authors reported several useful transformations of the obtained optically active α-nitrophosphonates into α-aminophosphonates. For instance, the reduction of the nitro group in the presence of zinc and ammonium chloride liberates the unprotected amino group, which spontaneously leads to the formation of cyclic imine 149 in 86% yield. On the other hand, the Baeyer–Villiger oxidation allows the corresponding ester 150 to be obtained in almost quantitative yield. This ester 150 can be subsequently used for the synthesis of cyclic lactam 151 after reduction of the nitro group and subsequent intramolecular lactamization. Moreover, the reaction of ester 150 with a primary amine to yield the acyclic amide 152, followed by the Clemmensen reduction of the nitro group, affords the acyclic α-aminophosphonate derivative 153.

Following the same line, in 2013, Jászay and colleagues reported the addition of α-nitrophosphonates 154 to aryl acrylates 155, also using a squaramide organocatalyst VIII. Even though some bulky phosphonates were tested in the reaction with phenyl acrylate, the use of iso-propyl and butyl phosphonates does not result in a further improvement in the reaction yield or enantiocontrol (82–85%, 52–64% ee), and the best enantiomeric excesses are obtained with ethyl phosphonates (93%, 76% ee). Concerning the aryl acrylate substrates, the best enantioselectivities are obtained for electron-donating aryl groups (e.g., 2,6-(OMe)2C6H3, 92%, 96% ee). In contrast, although no relevant effect is observed on the reaction yield, the use of strongly electron-withdrawing aromatic rings such as 2-NO2C6H4 results in lower stereocontrol (90%, 40% ee). Besides, the reduction of the nitro group results in a mixture of phosphorus-substituted γ-lactam 151 and cyclic α-iminophosphonate 157 in variable ratios, depending on the reaction pressure and the aryl groups present on the ester moiety. Michael acceptors other than conjugated carbonyl compounds have also been used in enantioselective reactions with α-nitrophosphonates.
Specifically, in 2012 Mukherjee's group reported the use of thiourea–alkaloid bifunctional catalyst IX for the addition of α-nitrophosphonates 146 to conjugated nitroalkenes 158, providing tetrasubstituted α-nitrophosphonates 159 in yields ranging from 64% to 82% when using both aryl and alkyl groups on the nitroalkene. Curiously, when the 2-naphthyl nitroalkene is used, the corresponding α-nitrophosphonate 159 is formed in only 38% yield. Nevertheless, diastereomeric ratios ranging from 83:17 to 95:5 and enantiomeric excesses up to >99% are obtained in all cases. The concomitant reduction of both nitro groups results in an intramolecular cyclization, which leads to chiral pyrazolidine 160 in 60% yield with no loss of optical purity.

In 2013, the use of vinyl sulfones as Michael acceptors in the addition of α-nitrophosphonates 146 was simultaneously reported by Namboothiri and Lu. Namboothiri proposes an enhancement of the electrophilicity of the vinyl sulfone substrates through the establishment of two hydrogen bonds between the acidic protons of squaramide catalyst X and the sulfone oxygen atoms, while the basic nitrogen of the alkaloid moiety activates the α-nitrophosphonate as a nucleophile (TS14). The reaction products are obtained in this case in excellent yield and enantiocontrol when aryl sulfones 161 are used (85–99%, 90–98% ee). In contrast, the use of a tetrazole-derived sulfone results in a decrease in the enantiomeric excesses (74–79% ee). Moreover, the reduction of the nitro group affords α-aminophosphonate 163 in almost quantitative yield (95%). Slightly lower yields but similar enantiomeric excesses were obtained by Lu and colleagues using thiourea catalyst XI (50–98%, 86–95% ee), obtaining in this case the opposite enantiomer (S)-162.

Analogously to α-nitrophosphonates, α-isothiocyanatophosphonates 164 possess a significantly acidic proton at the α-position, which makes them suitable nucleophiles. In this regard, Yuan and colleagues reported in 2013 the enantioselective addition of α-isothiocyanatophosphonates 164 to aldehydes 165 catalyzed by bifunctional thiourea catalyst IX. In this case, the initial nucleophilic addition to aldehydes 165 proceeds in a similar manner to that of the α-nitrophosphonates, through activation of aldehyde electrophile 165 by the thiourea acidic protons and simultaneous deprotonation of α-isothiocyanatophosphonate 164 by the basic quinuclidine unit of the catalyst (TS15). Due to the electrophilic character of the central carbon of the isothiocyanate, a subsequent intramolecular addition of the alcohol occurs (TS16), leading to the formation of cyclic α-aminophosphonates 166 in moderate to good yield and diastereocontrol (36–93%, 84:16 to >99:1 dr). However, only moderate enantiomeric excesses are obtained (68–81% ee) when aromatic aldehydes are used. Besides, the use of acetaldehyde as the electrophile results in a dramatic drop in the enantioselectivity (66%, 85:15 dr, 55% ee). One year later, Wang and colleagues reported an improvement on Yuan's work, using in this case squaramide catalyst XII. In this reaction, they obtain cyclic α-aminophosphonates 169 in excellent yield and stereocontrol using aldehydes 165 bearing not only electron-donating and electron-withdrawing groups, but also heteroaryl (2-furyl, 2-thienyl) aldehydes or conjugated cinnamaldehyde (86–99%, 94:6 to >95:5 dr, 87 to ≥99% ee).
Moreover, they also extended this methodology to N-tosyl aldimines 168, which provide cyclic thioureas 170 with similar results (80–99%, 86:14 to 92:8 dr, 92 to ≥99% ee). More recently, Albrecht and colleagues described the synthesis of tetrasubstituted spirocyclic chiral α-aminoesters and α-aminophosphonates 173 through a conjugate addition of α-isocyanates 171 to conjugated barbiturates 172 in the presence of squaramide catalyst X. Although the scope is limited, spirocyclic α-aminophosphonates 173 are obtained in high yields and stereocontrol (60–99%, 88:12 to >95:5 dr, 92–98% ee).

The reactions described so far in this section, in which the key step implies the formation of a C-C bond, entail the addition of an α-aminophosphonate equivalent onto an electrophile (a; vide supra). A complementary general method involving the generation of C-C bonds that also leads to tetrasubstituted α-aminophosphonates consists of the addition of carbon nucleophiles to α-iminophosphonates (b; vide supra). The synthesis of activated α-ketiminophosphonate substrates is known to be a challenging task, mainly due to the low reactivity found in the typical amine–carbonyl condensation reactions, where deactivated amide substrates are required, and to the intrinsic tendency of α-ketophosphonates to eliminate the phosphonate substituent, which leads to acylation reactions. Moreover, the high moisture sensitivity of such substrates entails additional obstacles for the purification of the imines, which very often have to be prepared in situ. For this reason, it was not until 2012 that our research group reported an efficient synthesis of α-ketiminophosphonates 174 and the enantioselective addition of nucleophiles to such substrates. In this reaction, the cinchonidine (XIII)-catalyzed nucleophilic addition of cyanide to α-phosphorated ketimines 174 provides optically active α-cyano α-aminophosphonates 176 in high yield (75–80%) and with enantioselectivities ranging from 73% to 92%. The presence of bulky isopropyl groups was found to be crucial in order to obtain high enantiocontrol, compared with other alkyl and aryl phosphonates. The fact that in alcoholic solvents the reaction proceeds fast but with no enantiocontrol might indicate a crucial role for the hydroxyl group of the cinchona alkaloid in transition state TS17, which may activate the substrate via hydrogen bonding with the iminic nitrogen. In addition, optically active α-aminophosphonic acid 177 was obtained in 80% yield without racemization by hydrolysis of the cyano group under strongly acidic conditions.

As a continuation of our research on enantioselective nucleophilic addition reactions to α-ketiminophosphonates, a few years later we reported the asymmetric aza-Henry reaction with ketimines 178, using bifunctional thiourea–alkaloid catalyst XIV. The reaction allows the use of electron-donating and electron-withdrawing aromatic groups at the imine substrates with no relevant differences in the yield or the enantioselectivity of the obtained α-amino-β-nitrophosphonates 180 (82–87%, 80–84% ee). In addition, the reduction of the nitro group is also reported, leading to α,β-diaminophosphonate 181 in almost quantitative yield. In the same context, more recently, we described the first example of an enantioselective aza-Reformatsky reaction with non-cyclic ketimines 178, using dialkyl zinc reagents and BINOL-derived chiral ligand XV.
The presence of molecular oxygen is crucial in this case in order to obtain a high yield, since other byproducts are observed when an inert atmosphere is used. The reaction can be successfully generalized to several aryl and heteroaryl ketimines 178 and alkyl iodoacetates 182, affording tetrasubstituted α-aminophosphonates 183 in excellent yield and enantiocontrol (76–92%, 93 to >99% ee). Furthermore, the synthesis of β-lactam 185 containing a tetrasubstituted α-aminophosphonate is also described, by selective deprotection of the ester group and a subsequent lactamization reaction. Although enantioselective nucleophilic additions to α-alkyl iminophosphonates remain almost unexplored, in 2014 a particular case using α-trifluoromethyl α-iminophosphonates was reported by Onys'ko and colleagues. In particular, the proline ( I )-catalyzed nucleophilic addition of acetone ( 33 ) to N-unprotected α-iminophosphonate 186 yields tetrasubstituted α-aminophosphonate 187 in high yield and enantiocontrol (80%, 90% ee). Moreover, some further transformations of α-aminophosphonate 187 are reported by the authors. For instance, the reaction of substrate 187 with 2,5-dimethoxyfuran in acidic media leads to N-heterocyclic derivative 189 in 84% yield through an aldol reaction of the in situ generated pyrrole 188. On the other hand, the reaction with aryl isocyanates leads to pyrimidine 191 via urea intermediate 190. As in the previous case, an intramolecular condensation involving the ketone moiety affords substrate 191 in 89% yield without racemization. Following a similar approach, Ohshima and colleagues reported some examples of a rhodium complex XVI-catalyzed enantioselective alkynylation of α-CF3 α-iminophosphonates 192. Once the alkyne 193 is inserted into the catalyst by displacement of the TMS-substituted alkyne ( 195 ), an enantioselective alkynylation of imine 192 takes place, leading to amide–rhodium complex 196, where a new chiral center is formed. Then, a new insertion occurs by the introduction of a second unit of alkyne 193, leading to the formation of rhodium complex 197. Finally, deprotonation of the terminal alkyne by the amide ends with the formation of α-alkynyl α-aminophosphonates 194 and the consequent regeneration of the active catalyst 195. The use of aryl and cyclopropyl substituents provides α-aminophosphonates 194 in excellent yields and enantioselectivity (86–99%, 80–93% ee). Following a similar approach, in 2012, Che's group reported the use of chiral rhodium catalysts XVII and XVIII in the enantioselective multicomponent reaction of diazophosphonates 198, anilines 199 and aromatic aldehydes 165. Here, in the first reaction step, the rhodium catalyst reacts with diazo compound 198 to form rhodium carbene species 201 with the release of nitrogen gas. The subsequent insertion of the aniline moiety leads to an ionic intermediate 202, which easily evolves to zwitterionic species 203. At this point, the rhodium phosphonate undergoes an addition reaction to the corresponding aldehyde substrate 165, leading to optically active tetrasubstituted α-amino-β-hydroxy phosphonates 200 while the catalyst unit is released for a new catalytic cycle. In this reaction, tetrasubstituted α-aminophosphonates 200 are obtained in moderate to excellent yield and stereocontrol (56–86%, 76:24 to 94:6 dr, 60–98% ee). During the last lustrum, a new family of cyclic α-ketiminophosphonates 204 has been used as electrophiles in enantioselective nucleophilic addition reactions.
In particular, the palladium–XIX-catalyzed enantioselective arylation reaction of α-iminophosphonates 204 was reported in 2016 by Zhou's group. The reaction can be generalized to several α-iminophosphonate substrates 204 and boronic acids 205 bearing electron-donating and electron-withdrawing aryl groups. Consistent with other reported examples, the use of bulky iso-propyl phosphonates results in higher enantiocontrol, providing cyclic α-aminophosphonates 206 in excellent yields (73–97%) and enantioselectivities above 99%. The high steric bulk of the phosphonate moiety induces coordination of the catalyst to the imine through the Si face ( 207 ), since coordination through the Re face ( 208 ) implies steric repulsions not only between the phosphine and phosphonate moieties but also between the tert-butyl group on the oxazoline ring and the sulfonyl protecting group on the imine. One year later, the enantioselective Friedel–Crafts reaction of indoles 210 with five-membered imines 209 was also reported. In this case, phosphoric acid XX was selected as the optimal catalyst, affording optically active α-aminophosphonate-functionalized indoles 211. The reaction can be successfully generalized to several indole substrates bearing electron-donating and electron-withdrawing groups with excellent yields and enantiocontrol (85–98%, 87–98% ee). However, the use of 2-methylindole results in a drastic drop in the enantioselectivity (91%, 59% ee). In addition, the addition of simple pyrrole to imines 209 leads to the formation of the analogous α-aminophosphonate-substituted pyrroles in 98% yield with 84% enantiomeric excess. In the same context, Zhang and colleagues described in 2018 a single example of the enantioselective Mannich-type addition of glycine Schiff bases 213 to five-membered iminophosphonates 212, providing tetrasubstituted α-aminophosphonate 214 in moderate yield and stereocontrol (48%, 80:20 dr, 83% ee). In the same year, Ma and colleagues reported the enantioselective decarboxylative addition of β-keto acids 215 to cyclic α-iminophosphonates 204. The reaction affords α-amino-β-ketophosphonates 216 when five- or six-membered cyclic imines 204 are used as electrophilic substrates. In addition, several alkyl and (hetero)aryl keto acids were tested, obtaining in all cases excellent yields and enantiocontrol (77–93%, 90–99% ee) and allowing a decrease in the catalyst loading down to 1% without any loss in the enantioselectivity. Likewise, the reduction of the ketone to obtain chiral alcohol 217 (88%, 94:6 dr) and the synthesis of aziridine 218 in the presence of tert-butyl hydroperoxide (61%, single diastereoisomer) are described with high yields and diastereocontrol. Besides reactions that imply C-C bond formation, through the functionalization of trisubstituted α-aminophosphonate derivatives with electrophiles or by the addition of nucleophiles to α-iminophosphonates, cycloaddition reactions are also efficient protocols leading to the formation of optically active tetrasubstituted α-aminophosphonate derivatives. The first example of such a reaction was published in 2011 by Kobayashi. In particular, they reported a [3+2] reaction between Schiff bases 219 and tert-butyl acrylate 220, using silver hexamethyldisilazide as a catalyst and chiral bisphosphine ligand XXIII. The reaction can be successfully generalized, affording several chiral pyrrolidines 221 bearing a tetrasubstituted α-aminophosphonate moiety in excellent yields and stereocontrol (72–81%, >99:1 dr, 90–98% ee).
According to the authors, the in situ formed silver–XXIII complex catalyzes the enolization of the phosphoryl group, leading to the active reagent for the exo-[3+2] cyclization through transition state TS18. More recently, an example of a dipolar cycloaddition was reported by Peng and colleagues. The reaction between diazophosphonates 222 and acryloyl oxazolidones 223 in the presence of a Mg–XXIV complex afforded enantioenriched pyrazolines 224 in high yield and enantiocontrol. For this reaction, the authors propose an activation of the electrophile through a double coordination of the chiral Mg complex to both carbonyl groups of acryloyl oxazolidones 223, inducing the preliminary addition of the nucleophile from the Re face in TS19. The subsequent trapping of the in situ formed enolate by the nitrogen atom of the diazo compound leads to cyclic α-aminophosphonates 224 in moderate to excellent yield and enantiocontrol (52–93%, 74–95% ee). In addition, the double reduction of the pyrazoline and oxazolidone moieties, followed by treatment with carbonyl diimidazole (CDI), leads to bicyclic pyrazolidine 225 in 84% yield. On the other hand, Boc-protected pyrazoline 226 can also be obtained in high yield (86%) by treatment with di-tert-butyl dicarbonate. Next, the selective deprotection of the oxazolidone moiety by a reduction reaction with sodium borohydride, and the subsequent protection of the resulting alcohol, affords pyrazoline 227 in 63% yield. Finally, the reduction of the pyrazoline and the in situ Cbz protection of the newly formed NH group lead to pyrazolidine 228 in 86% yield. Moreover, although not strictly a cycloaddition reaction, in 2018 Chi's group reported the use of five- and six-membered cyclic α-iminophosphonates 229 in an N-heterocyclic carbene XXV-catalyzed formal [4+2] cycloaddition with α,β-unsaturated aldehydes 230. In the catalytic cycle proposed by the authors, after the activation of carbene catalyst XXV*, the reaction starts with an addition of the carbene to the aldehyde substrate to generate intermediate 233. Then, a stoichiometric amount of an oxidant reagent is needed to regenerate the carbonyl group in species 234. Next, a base generates enolate 235, and the enantioselective vinylogous addition to the imine takes place, giving rise to adduct 236. The cycle ends with an intramolecular addition of the nitrogen atom to the carbonyl moiety, which leads to cyclic α-aminophosphonates 231 or 232 after releasing the active carbene catalyst XXV*. The reaction products were obtained in moderate to excellent yield (51–96%) and enantioselectivities above 92%. Remarkably, simultaneously with Chi's work, Ye and colleagues reported the same transformation under similar reaction conditions with a different NHC catalyst, resulting in the formation of the opposite enantiomer of α-aminophosphonates 231 and 232.
3.2. C-P Bond Formation
One of the simplest strategies for the preparation of α-aminophosphonates is the addition of phosphorus nucleophiles to imines. Although the first catalytic asymmetric hydrophosphonylation of aldimines was described by Shibasaki in 1995, it is easy to understand the additional complication of developing a catalytic system that works on ketimines, owing to the more difficult discrimination between the two faces of the prochiral species compared with aldimines. Therefore, it was not until 2009 that Nakamura's group described the first nucleophilic addition of diphenyl phosphite to N-sulfonyl ketimines 237 catalyzed by cinchona alkaloids XXVI and XXVII.
In the presence of a base and 2% of alkaloid XXVI, tetrasubstituted α-aminophosphonates 238 are obtained in excellent yields (93–99%), with moderate to excellent enantiomeric excesses (55–97% ee). The use of the hydroquinidine epimer XXVII as organocatalyst likewise results in the formation of α-aminophosphonates 238 with the opposite configuration, in yields up to 99% with good stereocontrol (52–95% ee). Since it was evidenced that the reaction does not work without the presence of a base, the transition state TS20 proposed by the authors might consist of coordination of the alkaloid nitrogen to the sodium cation, which would improve the nucleophilic character of the phosphite reagent. Furthermore, the use of an alkaloid with a protected alcohol group results in a drop in the enantioselectivity, which may indicate a dual activation mode of the alkaloid in which the hydroxyl group activates the imine 237 by hydrogen bonding. A few years later, Shibasaki reported the hydrophosphorylation reaction of N-thiophosphinyl imines 239 with various phosphites, using copper complexes and chiral bis-phosphine-based ligands as catalysts, thus obtaining tetrasubstituted α-aminophosphonates 240 with excellent enantioselectivities ranging between 86% and 97%. The reaction works very efficiently even using only 0.5–2% of the copper–XXVIII catalyst, which can also be reused. Another approach that can be used for the hydrophosphorylation of ketimines consists in the use of N-phosphinyl imines 241 in the presence of a bifunctional thiourea–iminophosphorane catalyst XXIX. This type of catalyst has a superbasic iminophosphorane acceptor and a classical thiourea donor unit. As described by the authors, the proposed transition state TS21 may involve the initial deprotonation of the phosphite reagent by the superbase unit, while the thiourea donor activates the imine electrophile 241 by hydrogen bonding. The resulting tetrasubstituted α-aminophosphonates 242 are obtained in excellent yields (78–99%) and with moderate to good enantiomeric excesses (46–71% ee). The first example of a hydrophosphonylation reaction of cyclic ketimines was described in 2013. In this case, trifluoromethylimines 243 derived from quinazolinone react with methyl, ethyl, benzyl or phenyl phosphites in the presence of a bifunctional alkaloid–thiourea catalyst IX. The reaction is highly dependent on the medium, and chloroform, dichloromethane, or hexane/dichloromethane (5:1) mixtures were used depending on the phosphite reagent. In this reaction, α-aminophosphonates 244 were obtained in high yields (75–91%) and excellent stereocontrol (81–93% ee). In the proposed transition state TS22, the two hydrogens are in a gauche conformation with respect to the bulkiest groups, in order to minimize steric interactions. The basic amine and the donor thiourea also adopt a gauche conformation, so that they can simultaneously activate the nucleophilic phosphite and the electrophilic imine 243, through an acid–base interaction and a double hydrogen bond of the thiourea moiety with the nitrogen and the carbonyl, respectively. This rigid conformation directs the nucleophilic attack from the Re face and supports the stereochemical outcome of the addition adduct with the R configuration. Later on, the addition of diphenyl phosphite to isatin-derived ketimines 245 using a bifunctional organocatalyst derived from squaramide XXX was described, with high yields (85–96%) and excellent stereocontrol (83–97% ee).
The use of 4-Br or 5-NO2 substituents on the phenyl ring of the isatin derivatives notably decreases the enantioselectivities in this reaction, with excesses between 52% and 68%. Taking into account the poor results obtained for the addition of phosphites to the analogous ketones, the authors infer an essential role for the iminic protecting group and propose transition state TS23, where the squaramide unit binds the isatin-derived imine while the quinuclidine moiety deprotonates the phosphite reagent, in order to facilitate the reaction. A preferential Re-face attack of the phosphite on ketimine 245 affords the ( R )-isomer of α-aminophosphonates 246. In the same year, a similar approach was described by Chimni's group, that is, the addition of diphenyl phosphite to ketimines 245 mediated by Cinchona-derived catalyst XXXI, providing similar yields (72–88%) and enantiomeric excesses (71–97% ee). Another example regarding the addition of phosphites to isatin-derived imines 245 was described by Kim in 2016, using a bifunctional squaramide XXXII derived from binaphthyl. The catalytic system is very efficient for unsubstituted isatins or those substituted with alkyl groups, giving yields up to 94% and enantioselectivities ranging from 73% to 99%. It should be noted that the use of an allyl or benzyl protecting group at the nitrogen of the amide moiety is strongly relevant for this process, since the presence of a carbamate group (R1 = Boc) in that position results in a strong drop in the yield (45%) and an almost total loss of stereocontrol (26% ee). Following the line of the previous examples, the proposed transition state is expected to be similar to TS23, showing a bifunctional activation of the phosphite and the ketimine 245. In this conformation, the phosphite nucleophile may preferentially attack the Re face of the imine. Some enantioselective methods for the addition of phosphites to imines derived from isatins 245, making use of metal catalysts instead of organocatalysts, have also been developed. In 2016, the titanium–salen complex Ti-XXXIII was used as catalyst in the hydrophosphorylation reaction of ketimines 245 to provide tetrasubstituted α-aminophosphonates 246 with high yields (84–88%) and good enantiomeric excesses (63–99% ee). It should be noted that the amide group of the isatin moiety must be substituted, since the presence of an unprotected nitrogen resulted in a significant decrease in enantiomeric excess (46% ee), although without affecting the yield (88%). A useful strategy for the in situ generation of cyclic ketimines and their subsequent activation for the nucleophilic addition of a phosphite reagent was reported by Singh in 2017. In this report, ketimines are produced from α-hydroxyamines 247 and then activated by phosphoric acid catalyst XXXIV, as in TS24, leading to the formation of tetrasubstituted isoindolinones 248 with yields up to 98% and high enantioselectivities (72–97% ee). Furthermore, the addition of diphenyl phosphite to azirines 249 was described a few years ago, using zinc complexes with chiral bisimidazolines XXXV, with yields up to 99% and enantiomeric excesses between 80% and 96%. The resulting phosphorus-substituted aziridines 250 were converted to oxazolines 253 through an initial acylation of the nitrogen with 3,5-dinitrobenzoyl chloride ( 251 ), followed by treatment with boron trifluoride.
Additionally, the ring opening of aziridines 250 with hydrobromic acid under ultrasound treatment leads to the corresponding β-bromo α-aminophosphonates 254. The halogen atom can also be removed by a radical reaction to give α-aminophosphonate 255 without any loss of enantiomeric purity (92% ee). The proposed pathway for this transformation may consist of an initial formation of a Zn(II)–XXXV complex 256, in which the metal coordinates with the oxygen and only one of the imidazoline moieties of the ligand, leaving the other nitrogen atom free. The zinc atom then coordinates with azirine 249 in a tetrahedral mode ( TS25 ), and the phosphite moiety establishes a hydrogen bond with the second, free imidazoline unit in TS26. The azirine is therefore positioned with its substituent facing outwards, favoring a Re-face attack of the nucleophile. After the addition and formation of the corresponding phosphorus-substituted aziridines 250, the Zn catalyst is released in order to return to the catalytic cycle.
3.3. C-N Bond Formation
Although there are only a few reports in the literature, the electrophilic amination reaction of trisubstituted phosphonates can also be an effective method for the preparation of tetrasubstituted α-aminophosphonates. In 2005, Jørgensen and Kim described, almost simultaneously, the electrophilic amination of β-ketophosphonates with azodicarboxylates. Jørgensen's method makes use of a zinc–oxazolidine complex XXXVI in order to generate an enolate from β-ketophosphonate 257, which undergoes an enantioselective nucleophilic addition to the nitrogen electrophile 258, providing the corresponding α-aminophosphonates 259 with yields up to 98% and excellent stereocontrol (85–98% ee). On the other hand, Kim described a similar reaction using a chiral palladium complex XXXVII in order to catalyze the addition of β-ketophosphonates 260 to diethyl azodicarboxylates 261 with similar yields (68–92%) and enantiomeric excesses (99% ee). Finally, although only one example is reported, with an acceptable yield (72%) and moderate enantioselectivity (50% ee), phosphonate-substituted aziridine 265 is obtained from α,β-unsaturated β-ketophosphonates 263 using a bifunctional catalyst derived from thiourea XXXVIII. The transition state TS27 might consist of a double activation of the ketophosphonate 263 by the thiourea unit while the basic unit deprotonates the nitrogen of the hydroxylamine derivative 264. Then, the ambiphilic oxycarbamate reacts with the olefin through a conjugate addition reaction, while the enolate attacks the nitrogen atom of the amine, with concomitant release of the tosyl group.
Even though some examples of the stereocontrolled synthesis of tetrasubstituted α-aminophosphonates have been reported, they are still rather limited compared with the homologous reactions for the preparation of trisubstituted α-aminophosphonates. In particular, in recent years the main efforts have been focused on enantioselective transformations, which are known to be more attractive than diastereoselective ones. It should be noted that most of the enantioselective reactions summarized in this review were published during the past decade and, thus, more articles regarding this approach are expected in the following years. One of the most promising topics is related to nucleophilic additions to α-phosphorylated ketimines, which have experienced important growth during the last five years. Another promising topic is the enantioselective addition of phosphorus nucleophiles to ketimines. It has been only slightly explored, with just a few examples reported to date, but, due to the vast number of synthetic protocols for the preparation of imines known in the literature, the development of new enantioselective protocols for this transformation would constitute a relevant improvement, expanding the structural diversity of accessible tetrasubstituted α-aminophosphonates.
Safe Deferral of Coronary Computed Tomography Angiography for Patients With a Low Pretest Probability of Coronary Artery Disease in 2019 European Society of Cardiology Guidelines
ab5c472b-8ec1-441c-bb02-92c49787f1f1
10727332
Internal Medicine[mh]
The 2019 update in European Society of Cardiology guidelines on chronic coronary syndromes suggests that 66% of the coronary computed tomography angiographies previously indicated could now be deferred because of low likelihood of coronary artery disease. Our study shows that, in this group of patients, the yield of coronary artery disease is low, and adverse cardiac events are rare. Pretest probability estimation can be safely used to prevent the overuse of cardiac imaging and to search for other causes of chest pain. The data that support the findings of this study are available from the corresponding author on reasonable request.
Study Population
The patients who underwent CCTA at Helsinki University Hospital during the years 2009 to 2017 for the suspicion of CAD were enrolled in the study. A registry was compiled retrospectively. Chest pain was classified according to previously accepted criteria. Clinical and laboratory variables at the time of hospitalization were collected from the electronic health records. Exercise stress testing was considered positive if >2 mm horizontal or downsloping ST‐segment depression was present in at least 2 adjacent leads. PTP for CAD was assessed using 2 methods: an updated Diamond‐Forrester model recommended by ESC guidelines in 2013 (ESC‐2013) and the contemporary revisited model in 2019 ESC guidelines (ESC‐2019). The patients were categorized into 2 groups according to the 2013 ESC guidelines on the management of stable CAD: defer (PTP ≤15%) and test (PTP >15%). A second categorization was applied according to the 2019 revision for corresponding values: defer (PTP ≤5%), consider test (PTP 5%–15%), and test (PTP >15%). The study was approved by the local Ethics Committee, and the institutional review board at Helsinki University Hospital, and the need for informed written consent was waived because of the retrospective study design. The patients and control subjects were treated according to the principles of the Declaration of Helsinki.
Coronary Computed Tomography Angiography
All CCTA images were obtained using a standard computed tomography scanner with ≥128‐detector rows. Imaging data were retrospectively assessed by radiologists specialized in cardiothoracic imaging. Coronary segments were visually analyzed for the presence of soft plaque, calcified plaque, or mixed plaque. The coronary artery tree was divided into 12 segments based on a modified American Heart Association classification. Coronary stenoses were classified as nonobstructive (<50% luminal stenosis in major epicardial branches), obstructive (>50% stenosis), or inconclusive when analysis could not be performed because of poor image quality or extensive calcifications. CCTA analyses were done using Syngo.Via VB30B (Siemens Healthineers). Subsequent stress imaging and invasive coronary angiography were considered CCTA driven if they occurred within 6 months of the index CCTA.
Adverse Events
The adverse events of the study were a composite of cardiac death, nonfatal myocardial infarction (MI), and hospitalization attributable to unstable angina later than 28 days after the index CCTA. Clinician‐assigned diagnoses and International Classification of Diseases ( ICD‐10 ) codes for every hospitalization are recorded in the Finnish national Care Register for Health Care, where all hospitalizations after index CCTA were collected. All imaging data after the index CCTA were verified from local patient records.
Death, cardiac death, and death from CAD were ascertained from Statistics Finland's Causes of Death Population Information System. Data until the end of 2018 were considered for this study. Statistical Analysis Categorical variables are presented as frequencies (percentages). Normally distributed variables are presented as mean±SD, and nonnormal variables are presented as median (quartile 1–quartile 3). Categorical variables were compared with Fisher exact test, and continuous variables were compared with Student t test. The relationship between categorical variables was assessed using logistic regression with odds ratios and CIs. Survival analyses were performed using Cox regression adjusted for age and sex and expressed as hazard ratios (HRs). Annual event rates were calculated by dividing the number of events by the number of person‐years at risk. A 2‐tailed P <0.05 was considered statistically significant. All statistical analyses were performed using SPSS, version 27 (SPSS Inc, Chicago, IL).
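To make the triage rules and the event rate definition above concrete, the following is a minimal sketch in Python. It is not the authors' analysis code (the analyses were run in SPSS); it only encodes the PTP cutoffs stated for the two guideline versions and the annual event rate formula, and the person-years figure in the usage example is hypothetical.

```python
# Minimal sketch, not the authors' SPSS analysis: encodes the ESC PTP cutoffs
# and the annual event rate definition described in the Methods.

def esc_2013_group(ptp_percent: float) -> str:
    """2013 ESC triage: defer if PTP <=15%, otherwise test."""
    return "defer" if ptp_percent <= 15 else "test"

def esc_2019_group(ptp_percent: float) -> str:
    """2019 ESC triage: defer (<=5%), consider test (5%-15%), test (>15%)."""
    if ptp_percent <= 5:
        return "defer"
    if ptp_percent <= 15:
        return "consider test"
    return "test"

def annual_event_rate(events: int, person_years: float) -> float:
    """Annual event rate = number of events / person-years at risk."""
    return events / person_years

# A patient with a PTP of 12% moves from "test" (2013) to "consider test" (2019).
print(esc_2013_group(12), "->", esc_2019_group(12))
# Hypothetical illustration of the rate definition: 5 events over 2000 person-years.
print(f"{100 * annual_event_rate(5, 2000):.2f}% per year")
```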
Patient Characteristics During the study period, 1753 patients underwent CCTA for suspicion of CAD and were included in the final study population. The baseline characteristics are presented in Table . The results of the CCTA and other imaging findings are summarized in Table . Downstream Impact of PTP Assessment by ESC 2019 Guidelines The change of estimated patient PTP between 2013 and 2019 ESC guidelines and the downstream diagnostic yield and revascularizations are described in Figure and Table . In brief, 857 patients in the 2013 test group were reclassified, 188 patients to defer and 669 patients to consider test. In these reclassified patients, the diagnostic yield of obstructive CAD was 4.7% (2.7% in the defer group and 5.3% in the consider test group). The revascularization rate was lower in the 2019 defer group than in the test group (2.6% versus 12.4%; P <0.001), whereas no significant difference was seen between the defer and consider test groups (2.6% versus 4.0%; P =0.244). PTP, CCTA, and Adverse Events at Follow‐Up A total of 67 patients died during the study (4.1%). The composite adverse event of cardiac death, nonfatal MI, or hospitalization attributable to unstable angina occurred in 53 patients (3.3%) during the median follow‐up of 45 (27–85) months. Event rates are shown in Table , and the event‐free survival is illustrated by Kaplan‐Meier curves in Figures and . The patients with no atherosclerosis had a low annual risk of the composite adverse event (0.29% [95% CI, 0.083%–0.503%]). Incrementally higher rates of events were seen in nonobstructive (1.44% [95% CI, 0.794%–2.08%]) and obstructive CAD (2.72% [95% CI, 1.04%–4.40%]) compared with those with no atherosclerosis on CCTA (HR, 5.45 [95% CI, 2.94–10.1]; P <0.001; and HR, 13.5 [95% CI, 6.88–26.6]; P <0.001, respectively); this association remained significant after adjusting for age and sex in the Cox regression model (HR, 3.07 [95% CI, 1.59–5.5]; P <0.001; and HR, 6.64 [95% CI, 3.16–14.0]; P <0.001, respectively). Survival of ESC 2013 and 2019 PTP groups is illustrated in Figure . Event rates were higher in the 2019 than in the 2013 test group but not in the consider test group.
A total of 9 cardiac deaths were observed in the consider test group (annual rate, 0.35%). In the group of 857 patients reclassified from the 2013 test group to the 2019 defer or consider test groups, the annual rate of cardiac death was 0.39% (0.20% for defer and 0.44% for consider test). This retrospective cohort study demonstrates that the updated ESC 2019 guideline for chronic CAD has a considerable impact on the diagnostic evaluation of suspected CAD. Our findings have significant clinical implications for effective health care resource use and radiation burden at the population level. In our study, the revised PTP model of the 2019 ESC guideline classified 390 patients (24%) as having low PTP (<5%) in whom diagnostic imaging should be deferred. Compared with the older 2013 ESC guideline, the updated PTP assessment reclassified 66% of the patients from the 2013 test group to the 2019 defer or consider test groups, which suggests that 66% of these CCTA studies could have been deferred.
Overall, only 28% of the referred patients were classified as having intermediate PTP and recommended for direct CCTA. Safety of Deferral of CCTA in Low PTP Revascularization rates were low and comparable in the 2019 defer and consider test groups (2.3% versus 3.7%, respectively). Compared with the defer group, the rate of cardiac death was higher in the test group but not in the consider test group, where annual cardiac mortality remained low at 0.45%. Our results further validate that CCTA can be safely deferred in these low‐risk patients who are subject to overuse of medical imaging and invasive procedures. Patients with nonobstructive or obstructive CAD have an elevated risk of adverse cardiac events. These were mainly attributable to MI and unstable angina, which precipitated early after CCTA in the patients with obstructive CAD, before access to invasive coronary angiography or after revascularization. However, the progression of CAD to clinical events was slower in the patients with nonobstructive CAD. Nonobstructive CAD was seen in 28% of the patients in the consider test group and 11% in the defer group. These patients would benefit from preventive therapy, but such patients cannot be identified with functional testing or from clinical evaluation alone. In asymptomatic patients, a cardiovascular risk scoring system, such as the Framingham risk score, should be used instead to decide on preventive therapy. Comparison to Randomized Clinical Trial Cohorts The baseline risk factors of this study population were similar to the patient characteristics reported in the PROMISE (Prospective Multicenter Imaging Study for Evaluation of Chest Pain) and the SCOT‐HEART (Scottish Computed Tomography of the Heart) trial cohorts. The percentage of patients with typical anginal chest pain (17.3%) was closer to that of the PROMISE cohort (11.7%) than SCOT‐HEART cohort (33.4%), which is consistent with the lower yield of obstructive disease. The use of exercise stress testing was less frequent than in SCOT‐HEART (52% compared with 85%). The PTP of obstructive CAD was significantly lower than in previous prospective CCTA cohorts (ESC‐2013 PTP, 31±18% versus 53±21% in the PROMISE trial). A meta‐analysis of 4 randomized clinical trials, including the SCOT‐HEART and PROMISE cohorts, showed that CCTA led to increased downstream testing, increased revascularization, and fewer MIs compared with standard care. In randomized clinical trial settings, the risk profile has been higher than in real‐world populations, with the average age at 60 years and an 18% prevalence of diabetes. Our observational study shows that in a real‐world population, the need for revascularization is low (5.6%), in contrast to 7.9% in the randomized clinical trial meta‐analysis. In a subgroup of patients from PROMISE and SCOT‐HEART classified as low risk according to 2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS Guideline for the Diagnosis and Management of Patients With Stable Ischemic Heart Disease, 2013 ESC, and 2016 The National Institute for Health and Care Excellence guidelines, the rate of revascularization was as low as 0.7% to 2.0%, marking a group of patients who gain limited benefit from noninvasive testing. The superiority of CCTA over exercise stress testing (decreased downstream testing and fewer MIs) has been shown consistently in meta‐analyses comparing individual diagnostic tests, where stress echocardiography and radionuclide imaging remain superior to CCTA. 
In the acute setting of low‐risk acute coronary syndrome, an initial strategy of CCTA led to increased downstream invasive coronary angiography and no clear advantage in the reduction of MI, perhaps attributable to the low baseline risk of revascularization (3%). Study Limitations Although we have shown that CCTA can be safely deferred in patients with low PTP according to the 2019 ESC guidelines, some patients might benefit from early definite rule‐out of CAD to prevent a cascade of diagnostic testing and readmissions. Our study's prevalence of adverse events was low because of low overall PTP for CAD, but it represents an unselected real‐world population referred to CCTA. Most important, the primary aim of our study was to investigate whether deferral of CCTA in this low PTP population is safe. We used the Finnish national registry to ensure the collection of all adverse events in our population. Hospitalization attributable to unstable angina and revascularization could be considered weak clinical end points, but we considered them essential from the point of view of deferring initial upstream testing. With the increased knowledge of nonobstructive CAD and its preventive treatments, the risk of adverse events could be even lower if we were to repeat our study. Our retrospective study setting might impact the estimation of patient symptoms compared with prospective interviews. A more sophisticated PTP estimation integrating patient risk factors, exercise stress testing, or calcium scoring might further improve the reclassification of patient selection for diagnostic CAD imaging. One such algorithm could be the PROMISE Minimal‐Risk Tool, which was used successfully in an integrated approach in the PRECISE (Prospective Randomized Trial of the Optimal Evaluation of Cardiac Symptoms and Revascularization) trial to reduce invasive coronary angiography without obstructive CAD and to increase medication use. We selected studies obtained using a modern computed tomography system to increase the generalizability of our results.
This study was funded by research grants from the Finnish Cultural Foundation and the Finnish Medical Foundation. Dr Uusitalo reports lecture fees and advisory board activity with Pfizer, and lecture fees and advisory board activity with GE HealthCare. The remaining authors have no disclosures to report.
Can Precision Oncology Benefit Patients With Cancers of Unknown Primary?
c976ab1b-3da0-4197-985f-7f5d74496e96
10546809
Internal Medicine[mh]
Proteomic dementia risk assessment in adults with diabetes
dd5248b4-1f09-42aa-96c4-fe87632f7b0e
11715217
Biochemistry[mh]
Best Practices for Designing and Testing Behavioral and Health Communication Interventions for Delivery in Private Facebook Groups: Tutorial
a94725e2-0fd9-436b-aaed-15f686e8a9db
11411228
Health Communication[mh]
The vast majority of the US population, both adults and youth, spends a substantial amount of time online, with social media platforms accounting for a major portion of this time . For example, Facebook (Meta), the most popular social media platform in the United States, is used by 239 million US adults, which represents 71% of the population . In 2023, Facebook users spent on average 30 minutes per day on the platform . In a survey of US adults about the top 3 mobile apps they felt they could least do without, Facebook was the most frequently cited app . To increase the richness of users’ experiences on Facebook, in 2010, Facebook launched groups, which are private spaces for people to come together around specific topics of interest (eg, hobbies and health). In 2017, Facebook founder, Mark Zuckerberg, announced an increased investment in Facebook groups, explaining, “building a global community that works for everyone starts with the millions of smaller communities and intimate social structures we turn to for our personal, emotional and spiritual needs.” This increased investment in groups resulted in a quadrupling of Facebook groups in the subsequent 2 years . By 2020, estimates suggested that 1.8 billion Facebook users were group members of approximately 50 to 100 million groups, and half of all Facebook users were members of ≥5 groups . Given the popularity of Facebook, the time users spend on the platform, and the emergence of private groups, Facebook presents a unique opportunity to deliver evidence-based behavioral interventions in a way that reaches a large audience by meeting people where they are. Many Facebook users already use the platform to discuss health as evidenced by the vast ecosystem of health groups on Facebook. In 2017, Facebook reported to have 6 million health groups containing >70 million members . Health groups on Facebook are typically created by a patient for the purpose of connecting users who have similar health conditions . Research on health groups on Facebook reveals they span a wide range of health topics including diabetes , cancer , hypertension , mental health , genetic disorders , sexually transmitted diseases , and long COVID (post–COVID-19 condition) , to name a few. Studies of participants of these groups show that they use groups to share personal experiences with a health condition, exchange information about their condition and treatments, and give and receive emotional support . Further evidence that patients are using Facebook to discuss and manage their health comes from a study of 2508 US adult Facebook users that found that more than two-thirds (69%) of the users posted about health at least once in the past year and 38% posted about health at least once per month . This appears to exceed the number of people who use mobile health apps: only 20% of men and 17% of women report to be currently using a health app . As such, popular social media platforms such as Facebook may be more conducive to creating a “community of health” than health apps that tend to be narrowly focused on a single health behavior or condition (eg, weight loss) and do not always include social features that allow the formation of groups where patients can coalesce around specific health topics as is the case on Facebook. Because of the reach and ease of use of Facebook, it is increasingly being used as a modality by which to deliver behavioral and health communication interventions . 
Typically, a Facebook-delivered intervention involves investigators creating a private group and delivering a feed of intervention content for participants to engage with asynchronously. Often a counselor is present and facilitates discussions and provides feedback and support. Any Facebook user can start a group for free and a user who starts a group is referred to as the administrator (ie, “admin”) and this individual may choose to recruit a moderator (ie, “mod”) whose role is to assist with gatekeeping for membership as well as moderating content posted by members. Although Facebook groups can be public or private, for the purposes of developing and testing interventions in the context of research, private groups are recommended to protect the confidentiality of participants and create a confidential space for them to share their experiences and opinions. To be sure, health communication campaigns delivered in the real world occur on public platforms (eg, social media messaging on public accounts, media advertisements, billboards, and community settings), but the use of private Facebook groups to evaluate the efficacy of a health communication campaign or a messaging strategy allows one to examine participants’ reactions to and engagement with messaging in a controlled setting before implementing in real-world public settings. Private Facebook groups are joined via invitation only, which allows admin control over entry, thereby reducing privacy risks. Members of Facebook groups can post in the group at any time, and all posts to the group by admins or any member appear in the newsfeeds of group members based on Facebook’s content algorithm . Facebook also provides an option for group members to receive notifications when new posts are made, which cues members to visit the group to see new activity. Facebook groups have myriad features designed to assist the admin in running a group and to facilitate meaningful engagement in the group . For example, admins can schedule posts in advance, which is a useful feature for behavioral interventions because they typically involve a collection of posts (ie, content library) that are posted over an extended period. Posts to the group can include text, images, videos, documents, or polls. Admins can also create a repository of document files in the group for members to access at any time. This is a useful feature for investigators who want to distribute handouts, worksheets, or other resources to participants. Facebook groups also allow admins to create chats on specific topics with subsets of group members and host live events such as question and answer sessions, webinars, or meetings within the private group interface. This allows investigators to incorporate synchronous intervention content directly within the Facebook group, precluding the need to leverage other platforms or videoconferencing technologies. To reinforce member engagement, Facebook provides an array of badges for group members. Group members can earn badges for being a new member, a frequent conversation starter, a founding member, a frequent contributor to conversation threads, and a “visual storyteller” by frequently posting images or videos. Badges appear next to the user’s name wherever they post or comment. Finally, group admins have access to Facebook Group Insights, which allows them to monitor group activity, including post engagement, a list of the most active members, and popular times for posting. 
Investigators can leverage these features in creative ways to engage members in behavioral strategies and encourage them to engage with and support each other. Overview Studies of Facebook-delivered interventions have been conducted for weight loss , healthy diet , physical activity , maternal care , caregiver mental health , postpartum depression , preventing cancer recurrence , vaccine hesitancy , smoking cessation , and HIV prevention , among other topics. Results vary widely in terms of clinical outcomes as well as participant engagement. Engagement tends to be a predictor of outcomes , but best practices for engaging participants in Facebook groups are lacking . Best practices for adapting and designing behavioral intervention content for asynchronous delivery in Facebook groups are also lacking. We previously described a process for adapting existing behavioral interventions for delivery via social media broadly, with guidance on platform selection, inclusion criteria, content creation, interventionist training, and data reporting . We now build on that work by proposing best practices specifically for the use of private Facebook groups for the delivery of behavioral and health communication interventions, including converting traditional intervention content into Facebook posts; creating onboarding, counseling, engagement, and data management protocols; designing and branding intervention content; and using data to optimize engagement and outcomes . A glossary of terms follows.
Glossary of Facebook group intervention terms:
Group moderator: Person who reviews all group member posts and comments to ensure they meet community standards.
Group administrator (admin): Person who started the group and establishes the community standards. They may also play the role of moderator.
Content library: The entire collection of intervention posts.
Discussion thread: A Facebook post that starts a discussion and the collection of replies made by group members in response to the post.
Poll: A Facebook post that allows group members to answer a multiple choice question or vote from a list of choices.
Call to action: The segment of a Facebook post that asks participants to respond in a certain way (eg, answer a question, share an opinion).
Brand identity: The visual elements (eg, name, logo, images, fonts, color palette) of intervention content that create a recognizable identity for the intervention posts so that group members can easily identify intervention posts in their feed.
Brand kit: The collection of fonts, color palettes, and graphic assets that are used to express the brand identity in intervention posts.
Microcounseling or asynchronous counseling: A form of counseling that occurs in Facebook groups in which counselors engage with group members asynchronously in discussion threads.
Engagement: Any visible evidence of participation by group members including reactions (eg, “likes”), comments, replies to comments, poll votes, or original posts on the group wall.
Engagement protocol: A protocol that outlines steps for re-engaging group members who have not engaged in a specified period.
Tagging: A written mention of a group member’s name which then results in that member receiving a notification that they have been mentioned by someone in the group. Clicking on the notification will lead them to the post in which they were mentioned.
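Because engagement, as defined above, tends to predict outcomes, study teams typically need a simple way to quantify it per participant. The sketch below is one hedged illustration of such a tally; the record format is hypothetical, and in practice the underlying data would come from manual coding of group activity or from Group Insights rather than from any automated feed.

```python
# Hedged illustration: tallying engagement events per member using the
# glossary's definition (reactions, comments, replies, poll votes, original posts).
# The log format is hypothetical; real data would come from manual coding of
# group activity or Group Insights.
from collections import Counter

ENGAGEMENT_KINDS = {"reaction", "comment", "reply", "poll_vote", "original_post"}

def engagement_counts(activity_log):
    """Return total engagement events per member."""
    counts = Counter()
    for event in activity_log:
        if event["kind"] in ENGAGEMENT_KINDS:
            counts[event["member"]] += 1
    return counts

log = [
    {"member": "P01", "kind": "reaction"},
    {"member": "P01", "kind": "comment"},
    {"member": "P02", "kind": "poll_vote"},
    {"member": "P03", "kind": "original_post"},
]
print(engagement_counts(log))  # Counter({'P01': 2, 'P02': 1, 'P03': 1})
```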
Identifying Your Intervention Type Overview Facebook groups are not an intervention in themselves; rather, they serve as a platform through which interventions can be delivered. A group with no content is simply a web-based gathering space, which in itself is unlikely to have an intervention effect on any behavioral or clinical outcomes. The first step in developing a Facebook-delivered intervention is to identify the type of intervention to be delivered. The 2 types of interventions that can be delivered are behavioral interventions and health communication interventions. Behavioral Interventions A behavioral intervention is a collection of behavioral strategies that teach people essential skills for developing adaptive behaviors or discontinuing maladaptive behaviors with the ultimate goal of impacting a clinical outcome . Behavioral interventions often have intensive protocols and traditionally have involved multiple visits with a behavioral provider. Examples include the Diabetes Prevention Program (DPP) Lifestyle Intervention , behavioral activation for depression , and cognitive behavioral therapy for insomnia . Some behavioral interventions require licensed mental health professionals to deliver (ie, treatments for mental disorders) while others (eg, lifestyle interventions) can be delivered by trained paraprofessionals (eg, health coaches). Behavioral interventions provide intensive support because they target health issues that require the adoption of many new habits. For example, lifestyle interventions target myriad habits relating to diet and physical activity, as well as habits relating to time management, problem-solving, goal setting, and planning . Behavioral interventions are typically administered to individuals or small groups in person or via telehealth, SMS text messaging, mobile apps, and web-based platforms, and they can last weeks or months and even up to a year or more. They are lengthy in duration because participants need time to learn and practice new skills, set goals, troubleshoot obstacles, receive feedback and guidance, and build consistency and support to adopt and maintain the new habits they are developing long-term. Although behavioral interventions are long in duration, relapse and backsliding are common when interventions are terminated, making long-term maintenance of behavior change an elusive goal . A promising feature of Facebook groups is that once built and a critical mass of engaged members coalesces, they have the potential to be self-sustaining for years, which may help people tackle issues that arise over time and compromise long-term maintenance. Health Communication Interventions Behavioral interventions can be contrasted with health communication interventions that use repeated messaging to prompt an audience to engage in a healthy behavior, such as vaccination, mask wearing, or cancer screenings. Health communication interventions also impact clinical outcomes and are often focused on disease prevention. Health communication interventions tend to involve repeated health messaging that is designed to increase a target population’s motivation to take a health action (eg, vaccine) by increasing the perceived risk of the disease and perceived benefits of the health behavior, providing reminders, providing countermessaging to combat misconceptions, and providing resources that eliminate barriers (eg, a link to make a vaccine appointment at a local pharmacy), among other strategies. 
They may also be used to connect people to behavioral interventions. For example, the Truth Initiative’s This is Quitting campaign disseminates videos and print materials (the health communication intervention) to connect youth who use e-cigarettes to a text-based e-cigarette cessation intervention (the behavioral intervention) . Health communication campaigns can occur over lengthy periods as well and often leverage print materials, billboards, social media, media, or opinion leaders and influencers to disseminate messaging to a large audience. A Facebook group for a health communication intervention is a vehicle to disseminate repeated messaging on a topic and may be used to test different messaging strategies (eg, gain- vs loss-framed messages). The distinction between behavioral interventions and health communication interventions is important from an intervention development perspective. Behavioral interventions require complex skill building and support to change habits; thus, people who enroll in them are typically seeking help for the target condition, and as such, have some level of motivation to change. For example, people who enroll in weight loss interventions do so because they want to lose weight. However, health communication interventions are designed to reach people who are not necessarily actively seeking help, including people who may not believe the health behavior is even relevant to them or healthy at all (eg, vaccine skeptics). For example, influenza vaccine campaigns are designed to reach people who have insufficient motivation to get the influenza vaccine. This motivational distinction is important because people who are not motivated to engage in a healthy behavior are unlikely to join a Facebook group on that topic; however, people who are motivated to change but need help to do so will be far more likely to join a Facebook group on that topic. As such, for the health communication intervention, the Facebook group topic should be one that is sufficiently engaging to the target audience to motivate them to enroll. For example, a Facebook-delivered health communication intervention aimed to decrease mothers’ willingness to allow their teen daughters to use tanning beds via a Facebook group that was themed on the broader topic of teen health . Only 15% of intervention posts were relevant to tanning beds, whereas the remainder of the intervention posts covered a host of health topics that moms rated as high interest in pretrial focus groups . Once the investigator determines whether the intervention is behavioral or health communication, the next step is to develop the intervention protocol that includes 6 components: a content library, branding and graphic design, an onboarding protocol, a counseling protocol, an engagement protocol, and a data management protocol. Interventionists using attention-control Facebook groups in their trials will need to develop similar protocols for their attention-control group as has been done elsewhere . Content Library for Behavioral Interventions Interventionists may want to develop a Facebook-delivered version of a behavioral intervention that has established efficacy when delivered via other modalities such as in-person visits or telephone; alternatively, they may want to develop a new intervention that is to be delivered for the first time via Facebook groups. 
A review of the intervention literature is a necessary first step to determine if evidence-based interventions for the target health issue exist, and if so, building off of this literature is recommended. For example, if an investigator is interested in helping a specific population segment with depression, a review of the literature will reveal that many depression treatments have been tested using a variety of modalities . The first step then would be to use protocols for existing interventions as a starting point and if cultural or other tailoring is necessary, doing the proper developmental work to finalize the intervention content so that it can be converted into a format for Facebook delivery. If an intervention has already been delivered via a Facebook group in other studies, as in the case of lifestyle interventions , the investigator should then design an intervention protocol that improves upon weaknesses identified in previous studies. If no behavioral interventions exist for the target health issue, the interventionist should take the proper steps to conduct behavioral intervention development, which have been described elsewhere . If the investigator does not have expertise in behavioral intervention development, partnering with a content expert is highly recommended for a first attempt at developing a behavioral intervention. A proper behavioral intervention protocol includes weekly modules that contain learning objectives and content to be covered for each week of the intervention (eg, behavioral strategies, discussion topics, and homework), as these will be the foundation by which to produce a content library of Facebook posts. Traditional behavioral interventions have intervention protocols, but when developing a new intervention, drafting an intervention protocol is recommended before attempting to create Facebook posts because it should be used to guide the development of intervention posts . Converting Intervention Content Into Facebook Posts We will use the 16-module DPP Lifestyle Intervention as an example for converting a behavioral intervention into Facebook posts. Each module of the DPP has learning objectives and facilitator and participant materials meant to be delivered in a 60- to 90-minute session. With this foundation, the intervention protocol can be converted into any number of modalities including a Facebook group. We converted each of the 16 DPP modules into 1 week of posts (14 posts) that are meant to be distributed 2 per day, 1 in the morning and 1 in the evening. This resulted in a library of 224 posts. In , we provide examples of how learning objectives were translated into Facebook posts. Once each learning objective is reflected in a Facebook post or posts, we recommend having a second investigator who has experience with the intervention to independently review the posts to verify that the learning objectives are adequately met to ensure that the Facebook posts have high treatment fidelity. Like many behavioral interventions, each DPP module includes recurring behavioral strategies (ie, weigh-ins, goal-setting, check-ins on the previous week’s goals, and problem-solving). We designed 4 posts that occur at the same time every week for these recurring behavioral strategies so that participants are prompted to engage in them every week. One benefit of asynchronously delivered interventions is that the interventionist can select optimal days for posts to occur depending on the nature of the post. 
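Before turning to how specific days and times were chosen for the recurring posts (described in the next paragraph), the weekly structure just outlined can be sketched programmatically. The code below is our own illustration, not part of the DPP protocol: it lays out 2 posts per day (14 per week), reserves 4 fixed slots for the recurring strategy posts, fills the remaining 10 slots with theme posts, and confirms that 16 weekly modules yield the 224-post library described above.

```python
# Illustrative sketch only (not part of the DPP protocol): one way to lay out
# the weekly posting template of 2 posts/day, with 4 recurring strategy posts
# in fixed slots and 10 theme posts per module week; 16 weeks x 14 posts = 224.

DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
RECURRING = {  # slot assignments follow the schedule described in the next paragraph
    ("Monday", "AM"): "goal setting",
    ("Wednesday", "AM"): "problem solving",
    ("Friday", "AM"): "weigh-in",
    ("Sunday", "PM"): "goal check-in",
}

def week_plan(module_theme):
    """Return 14 (day, slot, label) tuples for one weekly module."""
    plan, theme_post = [], 0
    for day in DAYS:
        for slot in ("AM", "PM"):
            label = RECURRING.get((day, slot))
            if label is None:
                theme_post += 1
                label = f"{module_theme}: theme post {theme_post}"
            plan.append((day, slot, label))
    return plan

library = [post for week in range(1, 17) for post in week_plan(f"Module {week}")]
assert len(library) == 224  # full content library size
```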
We selected specific days and times for the recurring posts to appear each week to allow participants to engage in these strategies at times of the week when they would benefit the most. For example, the goal setting post appears every Monday morning so that participants receive diet and physical activity goals at the beginning of the week and have the entire week to work on them. The post that checks in on how participants did on their weekly goals appears each Sunday evening, the last day before they receive the following week’s goal. This allows participants to share how they did with their goals, obstacles that got in the way, and a plan for overcoming those obstacles in the following week to prepare them for a more successful week. The “weigh-in” post appears each Friday morning (the last weekday of the week) so that participants are cognizant of their weight before the weekend begins, which may offset the weekend overconsumption that is common . The problem-solving post appears on Wednesday mornings, which is a point at which participants have had a couple of days to work on their weekly goals and may be encountering obstacles that, if solved before the week is over, could set them up for a successful week. The regularity of these posts provides participants with a predictable structure for the intervention, which is important when intervention content is delivered asynchronously and in such small segments relative to traditional formats where the entire module would be delivered via a 60- to 90-minute synchronous discussion. The remaining 10 posts of the week are designed to tackle the learning objectives of each module while leveraging content in the protocol for that module. The theme of the week is announced in the first post of the week to make participants aware of the discussion topic for that week. Content Library for Health Communication Interventions Health communication interventions typically involve a set of messages that are developed based on a conceptual model and meant to be delivered repeatedly over some period. For example, a health communication intervention based on Prospect Theory might set out to compare gain- versus loss-framed messages about a health behavior . In one study, this was accomplished by randomizing participants to Facebook groups that used gain- or loss-framed messages to improve physical activity motivation . Alternatively, a health communication intervention based on Transportation Theory might set out to evaluate the effectiveness of narrative-based messaging about a health behavior . Social marketing principles, which address how to influence a target audience by developing messaging that resonates with their values, can also provide a guiding framework . Specifically, this framework outlines steps, including identifying a target audience and its unique barriers to the target behavior, gathering data on the target audience’s values, developing messaging that resonates with the audience’s values, and disseminating messaging using channels used by a high proportion of the audience. These are just a few of the many conceptual models that have been used in health communication interventions . Once an investigator has identified a conceptual model, the next step is to begin drafting a library of messages that reflect the appropriate theoretical constructs.
As recommended earlier, the entire feed in a health communication intervention should not be exclusively focused on the target health behavior because people who are not motivated to learn about that behavior may be unlikely to join. For example, a Facebook group on vaccines would not attract the interest of people who are vaccine hesitant, in which case 2 problems are likely to occur. First, recruitment may be slow and difficult to accomplish. Second, the recruited sample is unlikely to be representative of the target population because people who volunteer to participate in a group solely focused on vaccines will naturally have more interest in vaccines. This increases the risk of ceiling effects that reduce the power to detect an intervention effect. Preliminary work that queries a representative sample of the target population on topics of interest is an approach that has been successful in previous work . A broad topic that encompasses the target behavior will allow messaging about the target behavior to fit into the content feed while also not overpowering it. Once a topic is identified, the investigator will need to develop messaging on that topic. In our previous studies, 15% of messages were on the target behavior and 85% of messages were on the broader topic and this was sufficient to produce an intervention effect . The investigators may or may not use theoretically-based messaging strategies in posts relating to the broad topic but doing so gives those messages scientific value, makes the feed consistent, and may provide interesting data for secondary analyses.
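As one hedged illustration of the content mix just described, the sketch below assembles a feed in which roughly 15% of posts address the target behavior and the remainder cover the broader group topic; the even-spacing rule and the post labels are our own choices, not a prescribed method.

```python
# Illustrative sketch of the ~15%/85% content mix described above; the
# even-spacing rule and post labels are our own choices, not a prescribed method.

def build_feed(target_posts, broad_posts):
    """Interleave posts so target-behavior messages are spread evenly through the feed."""
    total = len(target_posts) + len(broad_posts)
    step = total / len(target_posts)          # eg, 100 / 15 -> one roughly every 6-7 posts
    target_slots = {round(i * step) for i in range(len(target_posts))}
    target_iter, broad_iter = iter(target_posts), iter(broad_posts)
    return [next(target_iter) if i in target_slots else next(broad_iter)
            for i in range(total)]

feed = build_feed([f"target-behavior post {i}" for i in range(1, 16)],
                  [f"broad-topic post {i}" for i in range(1, 86)])
assert len(feed) == 100 and sum("target" in p for p in feed) == 15
```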
Once the investigator determines whether the intervention is behavioral or health communication, the next step is to develop the intervention protocol that includes 6 components: a content library, branding and graphic design, an onboarding protocol, a counseling protocol, an engagement protocol, and a data management protocol. Interventionists using attention-control Facebook groups in their trials will need to develop similar protocols for their attention-control group as has been done elsewhere . Interventionists may want to develop a Facebook-delivered version of a behavioral intervention that has established efficacy when delivered via other modalities such as in-person visits or telephone; alternatively, they may want to develop a new intervention that is to be delivered for the first time via Facebook groups. A review of the intervention literature is a necessary first step to determine if evidence-based interventions for the target health issue exist, and if so, building off of this literature is recommended. For example, if an investigator is interested in helping a specific population segment with depression, a review of the literature will reveal that many depression treatments have been tested using a variety of modalities . The first step then would be to use protocols for existing interventions as a starting point and if cultural or other tailoring is necessary, doing the proper developmental work to finalize the intervention content so that it can be converted into a format for Facebook delivery. If an intervention has already been delivered via a Facebook group in other studies, as in the case of lifestyle interventions , the investigator should then design an intervention protocol that improves upon weaknesses identified in previous studies. If no behavioral interventions exist for the target health issue, the interventionist should take the proper steps to conduct behavioral intervention development, which have been described elsewhere . If the investigator does not have expertise in behavioral intervention development, partnering with a content expert is highly recommended for a first attempt at developing a behavioral intervention. A proper behavioral intervention protocol includes weekly modules that contain learning objectives and content to be covered for each week of the intervention (eg, behavioral strategies, discussion topics, and homework), as these will be the foundation by which to produce a content library of Facebook posts. Traditional behavioral interventions have intervention protocols, but when developing a new intervention, drafting an intervention protocol is recommended before attempting to create Facebook posts because it should be used to guide the development of intervention posts . We will use the 16-module DPP Lifestyle Intervention as an example for converting a behavioral intervention into Facebook posts. Each module of the DPP has learning objectives and facilitator and participant materials meant to be delivered in a 60- to 90-minute session. With this foundation, the intervention protocol can be converted into any number of modalities including a Facebook group. We converted each of the 16 DPP modules into 1 week of posts (14 posts) that are meant to be distributed 2 per day, 1 in the morning and 1 in the evening. This resulted in a library of 224 posts. In , we provide examples of how learning objectives were translated into Facebook posts. 
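As a rough sketch of the scheduling arithmetic described above (16 modules, 14 posts per module, 2 posts per day), the following Python snippet lays out a placeholder content library across 16 weeks; the module texts, posting times, and start date are illustrative assumptions rather than DPP materials.

from datetime import date, time, timedelta

# Placeholder content: 16 modules, each converted into 14 posts (1 week at 2 posts per day).
modules = {week: [f"Module {week}, post {i + 1}" for i in range(14)] for week in range(1, 17)}

MORNING = time(9, 0)    # assumed posting times; choose slots that suit the audience
EVENING = time(19, 0)

def build_schedule(start_date):
    """Return (post_date, post_time, text) tuples covering the 16-week library."""
    schedule = []
    for week, posts in modules.items():
        week_start = start_date + timedelta(weeks=week - 1)
        for i, text in enumerate(posts):
            post_date = week_start + timedelta(days=i // 2)    # 2 posts per day
            post_time = MORNING if i % 2 == 0 else EVENING      # alternate AM and PM slots
            schedule.append((post_date, post_time, text))
    return schedule

schedule = build_schedule(date(2025, 1, 6))   # an arbitrary Monday start date
print(len(schedule))   # 224 posts, ie, 16 modules x 14 posts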
Once each learning objective is reflected in a Facebook post or posts, we recommend having a second investigator who has experience with the intervention independently review the posts to verify that the learning objectives are adequately met to ensure that the Facebook posts have high treatment fidelity. Like many behavioral interventions, each DPP module includes recurring behavioral strategies (ie, weigh-ins, goal-setting, check-ins on the previous week’s goals, and problem-solving). We designed 4 posts that occur at the same time every week for these recurring behavioral strategies so that participants are prompted to engage in them every week. One benefit of asynchronously delivered interventions is that the interventionist can select optimal days for posts to occur depending on the nature of the post. We selected specific days and times for the recurring posts to appear each week to allow participants to engage in these strategies at times of the week when they would benefit the most. For example, the goal-setting post appears every Monday morning so that participants receive diet and physical activity goals at the beginning of the week and have the entire week to work on them. The post that checks in on how participants did on their weekly goals appears each Sunday evening, the last day before they receive the following week’s goal. This allows participants to share how they did with their goals, obstacles that got in the way, and a plan for overcoming those obstacles in the following week to prepare them for a more successful week. The “weigh-in” post appears each Friday morning (the last weekday of the week) so that participants are cognizant of their weight before the weekend begins, potentially offsetting the weekend overconsumption that is common . The problem-solving post appears on Wednesday mornings, which is a point at which participants have had a couple of days to work on their weekly goals and may be encountering obstacles that, if solved before the week is over, could set them up for a successful week. The regularity of these posts provides participants with a predictable structure for the intervention, which is important when intervention content is delivered asynchronously and in such small segments relative to traditional formats where the entire module would be delivered via a 60- to 90-minute synchronous discussion. The remaining 10 posts of the week are designed to tackle the learning objectives of each module while leveraging content in the protocol for that module. The theme of the week is announced in the first post of the week to make participants aware of the discussion topic for that week. Health communication interventions typically involve a set of messages that are developed based on a conceptual model and meant to be delivered repeatedly over some period. For example, a health communication intervention based on Prospect Theory might set out to compare gain- versus loss-framed messages about a health behavior . In one study, this was accomplished by randomizing participants to Facebook groups that used gain- or loss-framed messages to improve physical activity motivation . Alternatively, a health communication intervention based on Transportation Theory might set out to evaluate the effectiveness of narrative-based messaging about a health behavior . Social marketing principles, which address how to influence a target audience by developing messaging that resonates with their values, can also provide a guiding framework . 
Specifically, this framework outlines steps, including identifying a target audience and its unique barriers to the target behavior, gathering data on the target audience’s values, developing messaging that resonates with the audience’s values, and disseminating messaging using channels used by a high proportion of the audience. These are just a few of the many conceptual models that have been used in health communication interventions . Once an investigator has identified a conceptual model, the next step is to begin drafting a library of messages that reflect the appropriate theoretical constructs. As recommended earlier, the entire feed in a health communication intervention should not be exclusively focused on the target health behavior because people who are not motivated to learn about that behavior may be unlikely to join. For example, a Facebook group on vaccines would not attract the interest of people who are vaccine hesitant, in which case 2 problems are likely to occur. First, recruitment may be slow and difficult to accomplish. Second, the recruited sample is unlikely to be representative of the target population because people who volunteer to participate in a group solely focused on vaccines will naturally have more interest in vaccines. This increases the risk of ceiling effects that reduce the power to detect an intervention effect. Preliminary work that queries a representative sample of the target population on topics of interest is an approach that has been successful in previous work . A broad topic that encompasses the target behavior will allow messaging about the target behavior to fit into the content feed while also not overpowering it. Once a topic is identified, the investigator will need to develop messaging on that topic. In our previous studies, 15% of messages were on the target behavior and 85% of messages were on the broader topic and this was sufficient to produce an intervention effect . The investigators may or may not use theoretically-based messaging strategies in posts relating to the broad topic but doing so gives those messages scientific value, makes the feed consistent, and may provide interesting data for secondary analyses. Text The typical intervention post includes text and some form of media (eg, image, video, and link) or polls. While Facebook has a generous character limit of 63,206 characters for posts, social media marketing experts recommend keeping the text at approximately 40 to 70 characters because shorter posts tend to get more views by being easier to read and comprehend when users are scrolling, especially on a mobile device . Posts that exceed 2 lines of text will require a user to click to “read more” if they are viewing on a mobile device and those that exceed 5 lines of text will require a user to click to “read more” if they are viewing it on a computer. Pithy posts that do not exceed these limits are ideal because if the initial text does not draw the user’s interest enough for them to click “read more,” the opportunity to engage them in the intervention content captured in that post will be missed. Another important consideration in drafting the text of an intervention post is how to engage participants with the intervention content in that post and the group itself. Human-delivered behavioral interventions are interactive, which entails exchanges not only between counselor and group members but also between group members. 
Group cohesion is an essential ingredient in group-delivered interventions and refers to the sense of belonging, interpersonal support, and acceptance that is generated from group interactions . In the group therapy literature, group cohesion is associated with better attendance, greater interpersonal support, and better outcomes . The ideal intervention posts will engage group members in a behavioral strategy and facilitate group cohesion. In Facebook-delivered interventions, the investigator should design posts that start conversations in ways that closely emulate how the intervention would be delivered offline. Posts should have a “call to action” such that participants are asked to share something in the comments, thus creating a “discussion thread” on a topic. The call to action might involve a question (eg, “what is the hardest part about exercising on the weekends?”), brainstorming (eg, “what are some ways to avoid nighttime snacking?”), soliciting experiences (eg, “how did you do on the fruit and vegetable goal this week?”), or soliciting opinions (eg, “what is your favorite healthy dessert?”). A major challenge in conducting asynchronous web-based groups is that they lack some interpersonal connection-building aspects of an in-person group such as nonverbal behavior (eg, eye contact), the ability to have a synchronous dialogue, and the ability to connect one’s story with their emotional and physical characteristics including their voice, facial expressions, and body language. For example, in an in-person group, a group member might tell a very moving story about overcoming a difficult challenge, and her facial expression, tear-filled eyes, and cracks in her voice are likely to stimulate emotional reactions and empathy among fellow group members. However, the same story typed into a comment on the internet without the nonverbal interpersonal experience may not generate as intense of an emotional reaction among group members, especially if the comment is viewed by a group member as they are casually scrolling through their Facebook feed without full attention to the content they are viewing. As such, providing as many opportunities as possible to engage group members, pull them into discussions, stimulate emotional connections, and cue them to share their experiences are all essential to building group cohesion in this format. provides examples of conversation starters. Images The image in a post is most often the first thing a Facebook user will see, even before reading the text. The image heavily influences the user’s decision to read the text. As such, images are an opportunity to draw group members into the post. Posts with images also tend to elicit more engagement , therefore it is highly recommended to include an image in a text-based post. The image should resonate with the message of the text such that once the group member is drawn in, they will find that the text complements the ideas and themes reflected in the image. Using an image that is attention grabbing but unrelated to the message of the text may confuse participants or result in their feeling baited into a discussion they were not expecting, which may discourage engagement not only on that post but also on future posts. Images can contain text, infographics, or pictures. If text is in the image file, it can include the call to action, but then the text of the post would not need to include a call to action as well. 
Having one call to action per post makes it easier for participants to understand what is being asked of them and how to respond. For example, if the post asks participants to both set a goal and do problem-solving, the discussion thread might include some participants doing one or the other or both, which means the discussion thread will have different discussions going on within it, rather than everyone focused on sharing the same thing and reacting to what each other has shared. Another consideration when posting images is ensuring they appear in high quality when posted, that the text is large enough to read on mobile devices, and that the images are of the optimal size for posting on Facebook. Creating a “test group” to review posts on both mobile and desktop before they are used in an intervention can help to identify edits that should be made to improve clarity and readability. Videos Videos can be an excellent tool for conveying complex information that cannot be captured in a text-based post and for the group to become better acquainted with the interventionist. A brief introductory video shared on the first day of the intervention where the interventionist introduces herself and gives a brief overview of the program can be useful in building a relationship with the group. The frequency of videos is at the interventionist’s discretion, but videos require participants to click on them to receive the content, which may result in fewer participants viewing that content. To increase the click rate, the text for the video and the video thumbnail should pique participants’ interest. The maximum video length allowed on Facebook is 240 minutes, however, this length of video would not likely be viewed by many participants. Facebook recommends keeping videos to 15 seconds to maximize the chances the viewer will watch them to the end . The first 3 seconds of the video should be designed to draw the participant’s interest enough for them to continue watching and is the ideal place to insert the most important part of the message. Including captions will also increase the likelihood that participants will view the video because captions allow participants to digest the content without audio, which is inclusive of people of all abilities and useful for a participant who is in a location where they cannot play audio. Videos do not need to be professionally produced as “home grown” videos shot on mobile phones are extremely common on social media and they are inexpensive to create. The use of teleprompter mobile apps while shooting a video can help the subject of the video appear more natural when communicating into the camera. Polls Polls tend to attract more engagement than other posts , which presents a unique opportunity to use them to solicit experiences and opinions from participants that they might not otherwise share via other post types. Polls can also be used to help participants with goal setting, testing their knowledge, increasing accountability, and problem-solving ( describes poll discussion starters). For example, polls may be used for problem-solving by designing them to solicit barriers to a behavior change. In our lifestyle intervention, we use weekly polls to solicit barriers to diet and physical activity. Each poll option will show the names of participants who selected it, which allows the interventionist to follow up in the comments and engage in problem-solving with participants who selected each option. 
Polls should be used as discussion starters because they may not contribute to an intervention effect if the participant simply selects an option and no further discussion ensues. Having the interventionist address what they learned about participants in the poll by creating comment threads that tag participants who selected different options is a way to use polls to engage participants in a behavioral strategy. Links Facebook posts can also include links to other content such as articles, websites, and other resources. A drawback to using links to deliver intervention content is that if the substance of the post is behind the link, only participants who click the link will receive it. If it is possible to simply put the key message directly into the text of the Facebook post, the proportion of participants who receive it will be higher. As such, links are best used as a resource for additional information or reading beyond what is included in the post rather than for the delivery of essential intervention content. Another drawback of links is that they drive participants out of the Facebook group and to another website and this may reduce the likelihood they will engage on the content in the group. For example, a participant may click on a link to an interesting article and then proceed to engage in the comment section of that article or continue browsing that website rather than the group. We recommend using links sparingly and instead creating a resource library that participants can access at any time. In our lifestyle intervention, we created a Pinterest page that includes links to recipes, meal plans, workout videos, and other resources for participants to use as a resource library . This prevents informational content from taking up too much space in the Facebook group because such content is not likely to start conversations. When participants ask for resources, the counselor can then reply with a link to the Pinterest page. 
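One way to complement manual review of drafts is a lightweight automated check before posting; the Python sketch below is illustrative only, representing a draft post as a plain dictionary and using thresholds that echo the rules of thumb above (roughly 40 to 70 characters of text, a single call to action, and an accompanying image, video, poll, or link).

# Illustrative pre-posting checks; a draft post is represented as a plain dict.
def review_draft(draft):
    warnings = []
    text = draft.get("text", "")
    if not 40 <= len(text) <= 70:
        warnings.append(f"text is {len(text)} characters; aim for roughly 40 to 70")
    questions = text.count("?")                      # crude proxy for a call to action
    if questions == 0 and not draft.get("image_has_cta", False):
        warnings.append("no call to action in the text or the image")
    if questions > 1:
        warnings.append("more than one question; keep to a single call to action")
    if not any(draft.get(k) for k in ("image", "video", "poll", "link")):
        warnings.append("no media attached; posts with images tend to get more engagement")
    return warnings

draft = {"text": "What is the hardest part about exercising on the weekends?",
         "image": "weekend_walk.png"}
print(review_draft(draft))   # an empty list means the draft passes these simple checks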
Purpose Branding and graphic design are essential elements in the building of behavioral interventions delivered through Facebook groups. These disciplines are not merely for esthetic considerations, they are instrumental to effective communication, comprehension, reinforcing the core values and messaging in an intervention, and enhancing engagement, and ultimately, the impact of the intervention. Graphic design involves leveraging visual elements and design principles to communicate ideas and concepts . A brand is “any distinctive feature like a name, term, design, or symbol that identifies goods or services,” according to the American Marketing Association . For example, yellow arches with a red background are highly recognizable around the world to be the McDonald’s brand. Brand identity is the mix of visual elements, including the program’s name, logo, symbols, typography, and color palettes that people recognize and associate with the brand. Brand identity can also be conveyed in other elements of the program, including recruitment advertisements and the program website. Just as commercial marketing uses branding to drive behavior change for revenue generation, branding can be useful to drive behavior change when designing content for Facebook-delivered interventions. Branding and graphic design help establish the identity and credibility of an intervention, and as a result, may enhance participant engagement by drawing participants’ attention to intervention content. Adding a graphic designer to your team is a best practice as their expertise ensures that the visual representation of the intervention is not only appealing but also strategically aligned with the conceptual model and behavior change strategies being used. 
This synergy is key in crafting a distinct presence in the digital realm, where recognizable elements such as a program’s name, logo, and design need to be consistent and impactful to capture the fleeting attention of users scrolling through their web-based newsfeeds. The process of bringing a program’s brand identity from concept to community requires that each visual element aligns with the overarching goals of the intervention. In terms of budgeting, in addition to the graphic designer’s fees, budgets should include licenses for collaborative tools such as Canva (Canva Inc) or Adobe Express (Adobe), which help streamline the design process and enable development, sharing, editing, reviewing, and storage of intervention content. If the project budget is lean, investigators can explore collaborations with fine arts or graphic design departments at their university to identify students who have proficiency in using design tools. Organizing Branding and Graphic Design: Key Elements The journey from conceptualizing a program’s brand identity to the final implementation of its content in a Facebook group requires continual collaboration between a graphic designer and investigator. Incorporating a graphic designer from the onset ensures that branding is not an afterthought but a foundational component of program development. Before beginning, take time to budget appropriately and establish timelines and regular communication channels, such as weekly meetings, to ensure progress and alignment with the intervention goals. The brand identity and design process involves these elements:
Program name: determining the program name is the initial step in creating a brand identity. The name should be pithy, capture the essence of the program, and be memorable to the target audience. Jargon should be avoided, including the use of the term “intervention” given it conjures different meanings when used colloquially.
Audience defined: understanding the audience is crucial to designing a brand that resonates with that audience. The demographic, psychosocial, and behavioral characteristics of the target audience should guide the visual and communication strategies of the brand.
Logo design: the logo, also referred to as a brand mark, is the visual cornerstone of the brand identity. An effective logo should be distinctive and relevant to the health program’s mission. It should also be visually effective across various formats, including web, mobile, and branded promotional items (eg, mugs and magnets). Some universities require alignment with the university brand kit when creating logos and branded materials. Identify and communicate what standards your program and designer need to follow.
Brand kit developed: a comprehensive brand kit includes fonts, color palettes, and graphic assets. These elements ensure consistency across all materials and platforms. A brand kit can also include templates to aid in future intervention content development.
Finalize intervention content conversion: content conversion involves adapting the intervention materials into individual posts, including copy, a visual idea, a call to action, and a post type suitable for Facebook group nuanced delivery. The text content of each post should be finalized before branding and design.
Decide on post type balance: a balance of post types (polls, images, videos, etc) is necessary to maintain user engagement and cater to different content consumption preferences. 
Overuse of any one type of post could result in participant fatigue for that post type. For example, although polls tend to get high engagement, the use of several polls in a week could result in the feed becoming monotonous, which could result in participants disengaging.
Graphic designer designs posts: after receiving the converted intervention content, the graphic designer uses the brand kit to design the posts in a way that aligns with the brand identity.
View posts on all device types: once posts are designed, they should be viewed on both desktop and mobile platforms to evaluate readability, compatibility, and accessibility. Posts that are difficult to read on any platform should be modified. This ensures that the content’s integrity is maintained across all platforms participants might use to view it.
Focus group testing of posts: focus groups conducted before and after pilot studies can be used to evaluate posts in terms of clarity, comprehension, valence, and persuasiveness. Focus group participants can also share how likely they would be to engage with the post and how they would engage, which reveals whether the post is likely to elicit the type of engagement intended.
Post revision: posts should be revised based on the feedback from testing and may require multiple iterative rounds of testing and revision.
Content library and posting: software programs such as Canva or Adobe Express can be used to store and edit the entire collection of posts (ie, content library). Posts can then be scheduled for Facebook group posting directly from the content library or via Facebook.
Data-driven modifications: once the intervention ends, post engagement data should be inspected to identify posts that are in the bottom quartile of engagement. These posts can then be compared with those in the top quartile of engagement to glean possible reasons for low engagement. This will help guide post modifications to improve engagement in the future.
Collaborating With Graphic Designers Graphic designers are unlikely to have deep knowledge of the intervention topic, thus intervention content development must occur via close collaboration between the investigator (content expert) and the graphic designer. The investigator is responsible for ensuring that the conceptual model, behavioral strategies, and intervention learning objectives are expressed in posts, while the graphic designer is responsible for ensuring the visual elements align with those features and resonate with the target audience. The graphic designer should be included in team meetings to ensure they are knowledgeable about not only the intervention but the study itself. 
Because Facebook-delivered interventions are a unique experience relative to user-initiated Facebook groups, an onboarding protocol that helps participants know what to expect and how to engage can prepare them to be actively engaged participants and get the most out of the group. Onboarding can be done via a telephone call, webinar, or written materials. It is important to choose a modality that allows the research team to assess the participants’ understanding of the intervention and gives participants the opportunity to ask questions. We conduct 1-hour onboarding webinars before randomization to discuss Facebook privacy policies related to groups, the origin of the intervention, the intervention goals, the recurring weekly posts and how to respond to them, how to participate in the group including how to post and the option to post anonymously, how to earn engagement badges, and how we will extract data from the group and what we do with that data. During the webinar, participants are given the opportunity to ask questions and are also asked to share any anticipated barriers to participation. The use of onboarding webinars such as this has been shown to improve retention . The counseling protocol provides guidance for counselors about their role in an asynchronous web-based setting. Asynchronous web-based counseling, which we have referred to as “microcounseling” elsewhere , is different from in-person or synchronous counseling in that a written post starts the conversation, some group members respond, usually at different times, and other group members do not respond at all. If the counselor is too passive, participants who are initially engaged may exhibit declining engagement when they see they do not get meaningful responses, and participants who do not engage much initially may never engage very much throughout the intervention. A proactive counselor can be instrumental in setting the tone for a highly engaged group and is an important role model for engagement in the group. The counseling protocol includes a basic orientation to Facebook groups and their features; an orientation to the content library; instructions for how to engage on each post ; how to record brief videos; and scripts for brief videos, live chats, or live group meetings. A detailed protocol will be useful, particularly for counselors who do not have experience in this setting. Counselors with no experience in this setting should be trained in advance and shadow a more experienced counselor until they feel comfortable leading a group alone. Regular counselor supervision meetings can be useful to help them navigate challenging participant situations including disengaged participants. As in any behavioral intervention, some participants disengage over time and this puts them at risk for dropout, and ultimately, treatment failure. Numerous studies show that engagement in Facebook-delivered interventions is a predictor of treatment outcomes , so having a protocol to maximize participant engagement at the outset and to reengage participants who exhibit declining engagement may enhance retention and outcomes. Engagement is a key variable in the algorithm Facebook uses to determine the order in which content appears in an individual user’s newsfeed . Facebook’s algorithm is proprietary, but generally it “scores” content by how much it predicts the user will enjoy that content, and that score is based on the user’s past engagement with content from that source and how popular the content is among other users. 
It then arranges the user’s feed in such a way that the content that is prioritized is the content the algorithm predicts the user is most likely to enjoy . Because the degree of engagement in a Facebook group overall is a factor in the algorithm, it is important that intervention content attracts participant engagement. A poorly engaged group may result in declining post views (and thus engagement) by all group members simply because content from the group will not be prioritized by the algorithm . The objective of an engagement protocol is to provide a structured plan to maximize participant engagement. During onboarding, participants should be instructed to add the group to their “Favorites” list because Facebook’s algorithm prioritizes content that users put on their “Favorites” list . Participants can accomplish this by clicking on their account icon, going to Settings and Privacy, selecting Feed, and then choosing Favorites. They can then find the group on the list of items they follow and add it to their Favorites. Users can add up to 30 items to Favorites. During the intervention, investigators should track engagement data weekly to identify participants who are becoming disengaged. Facebook group insights can be used to identify participants who have not engaged in the past week, which is a point at which reengagement attempts may be indicated. Once disengaged participants are identified, the counselor can attempt to reengage them in a few ways. First, they can “tag” a participant in a reply to a post, asking them for their thoughts on the post. This simply entails typing the participant’s name into the post, which then sends them a notification that they were tagged; however, if the participant turns off notifications from the group, they will not be notified that they were tagged in a post. Limiting the number of participants tagged in one comment may be prudent because tagging many participants in a single comment on a post could result in a diffusion of responsibility to reply and therefore, be counterproductive to increasing engagement. Tagging should be used sparingly and judiciously to avoid a participant feeling singled out after being tagged repeatedly. If a participant does not respond to being tagged, the counselor can send a private Facebook message to the participant to attempt to reengage them. If the participant is not logging into Facebook often, they may not receive private messages, in which case the counselor could send an SMS text message, email, or call them on the phone. Another problem that can be addressed in the engagement plan is what to do when specific intervention posts garner little or no engagement. If a particular post is not getting much engagement in the first 12 hours of posting, the counselor can tag the entire group in a comment by starting the reply with “@everyone,” which will send a notification to all group members that they have been tagged in a comment. Once the intervention ends, the investigator should sum engagement data (eg, likes and comments) for each post and identify posts that are in the bottom quartile for engagement. Comparing these posts to those in the top quartile may elucidate characteristics of posts that elicited very little engagement and can be useful to guide intervention content refinements. 
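If engagement has been logged outside of Facebook (eg, manually into a spreadsheet or database), the weekly disengagement check and the end-of-intervention quartile comparison could be operationalized along the lines of the Python sketch below; the file names, column names, and 7-day window are assumptions, not part of any platform interface.

import pandas as pd

# Hypothetical engagement log: one row per reaction, comment, or poll vote.
events = pd.read_csv("engagement_log.csv", parse_dates=["timestamp"])
roster = pd.read_csv("enrollment_roster.csv")          # all enrolled participant IDs

# Weekly check: participants with no logged engagement in the past 7 days.
week_ago = pd.Timestamp.now() - pd.Timedelta(days=7)
recent = set(events.loc[events["timestamp"] >= week_ago, "participant_id"])
disengaged = sorted(set(roster["participant_id"]) - recent)
print("No engagement in the past week:", disengaged)

# End-of-intervention check: posts in the bottom quartile of total engagement.
per_post = events.groupby("post_id").size().rename("engagement")
q1, q3 = per_post.quantile([0.25, 0.75])
bottom_quartile_posts = per_post[per_post <= q1].sort_values().index.tolist()
top_quartile_posts = per_post[per_post >= q3].index.tolist()
print("Posts to review for low engagement:", bottom_quartile_posts)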
For example, low-engagement posts might have a missing, cryptic, or confusing call to action, or the call to action asked group members to share something many people might not be comfortable sharing (eg, “Share a time when you ate far more than you planned”). Other factors to look for in low-engagement posts include high character counts, long duration videos, use of links that may have driven participants away from the group, or images with hard-to-read fonts. Another reason for low engagement might be that the content of the post was not inclusive to all group members. For example, the post may have only resonated with group members of certain racial and ethnic backgrounds, sexual orientations, body sizes, genders, ages, skin tones, or life circumstances (eg, marital status). Investigators can also inspect the inclusivity of discussion threads for each post by examining who participated in the thread and who did not. Postintervention focus groups that query participants on how inclusive the feed felt to them can provide valuable information for refining intervention posts. Some investigators, particularly those who are using a set of intervention content for the first time, might discover that a high proportion of posts received very little engagement. This may occur for a variety of reasons. One reason may be that many participants in the sample are not active on Facebook. In addition to low engagement, another sign that participants are not active on Facebook is low view counts on intervention posts. For each post, Facebook shows data on how many group members viewed the post. If only a small fraction of participants viewed a large proportion of intervention posts, this raises the possibility that the sample may include too many people who are not using Facebook often. This can be remedied in future studies with inclusion criteria that require participants to log into Facebook regularly (ie, several times per week). Facebook-delivered interventions are best matched for regular users of Facebook because they are already in the habit of logging into Facebook and reading their newsfeeds. When recruiting participants who are not regular users, the investigator has the additional task of cuing participants to log in. This can be accomplished by having them set up email notifications when a new post is made in the group or scheduling a daily reminder to visit the group. Low engagement on numerous posts could also be due to the factors relating to post quality as discussed previously. For example, if a large percentage of posts lack a call to action or have very high character counts, this may inadvertently discourage participant engagement as they may lose interest or feel the feed is too cognitively taxing. We recommend that investigators conduct single-arm proof-of-concept or pilot studies of a brief version of the intervention to pretest intervention content, which can flag low engagement posts so they can be revised before conducting a fully powered randomized trial. A Facebook-delivered intervention can yield thousands of reactions, comments, and poll votes from participants, which provides a wealth of opportunities to understand how participants interacted with the intervention content, the counselor, and each other. Although Facebook provides some group engagement insights, unfortunately, it does not provide summaries of participant-level engagement data. 
In January 2024, Meta announced that the Facebook Groups application programming interface would no longer be supported after March 2024. This means that software tools that extract engagement data from private groups (eg, Grytics) are no longer functional. The process of extracting engagement data from the group and converting it into a data set containing participant-level engagement in a wide format (ie, each participant occupies their own row, and each variable occupies a single column) is as follows. First, the investigator should indicate in the consent form that engagement data will be extracted from the group, to be sure that participants understand that their engagement in the group is data under study. An important consideration for engagement data extraction is timing because although 75% of engagement on a post occurs within the first 5 to 6 hours of it being posted, additional engagement may occur over the next few days as some participants may not visit the group every day. For this reason, data extraction for a given post should not occur until at least a week has passed. If a participant leaves the group prematurely, their engagement will remain; however, their "views" data will disappear from all posts upon their exit. Thus, extracting engagement data before the intervention ends, which is the point when participants are most likely to exit the group, will preserve "views" data. Engagement data can be manually extracted by a research assistant into a secure web application (eg, REDCap [Research Electronic Data Capture; Vanderbilt University]). First, the unique ID numbers and the content of the following items should be extracted: (1) the Facebook post or poll, (2) comments, and (3) replies to comments. Then, the author ID and name, time stamp, text, attachment (link, image, and video), the IDs of the participants who made reactions and the types of reactions, the IDs of the participants who viewed the content, and direct URLs to the Facebook posts and comments should be extracted. Trials may have hundreds of posts and thousands of comments and reactions depending on intervention length and number of participants; thus, proper budgeting for this labor is important, as data extraction can be a time-consuming task. For quality control, a second research assistant should review engagement data from a randomly selected 5% of intervention posts and record and correct errors. If the error rate is >10%, then the research assistant who did the extraction should be retrained, and the remainder of their intervention posts should be rereviewed so any errors can be corrected. Manually extracted Facebook engagement data should be imported into a statistical software program (eg, SPSS, SAS, Stata, or R), where the investigator should assign week numbers to each post, comment, reply, and reaction based on the time stamp variable. This allows weekly engagement to be computed and engagement trends over time to be examined. The investigator should also assign variable names and convert the date and time format to be compatible with the database program. The final Facebook engagement data set should be aggregated to the participant level such that each participant ID occupies its own row and each engagement variable occupies a single column. Data in this format are ready for analysis. Engagement data are often not normally distributed; thus, distributions should be explored to identify outliers and to determine whether parametric or nonparametric analyses are appropriate.
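A minimal sketch of these aggregation steps in R follows; the event table, column names, and intervention start date are hypothetical placeholders rather than a required layout.

```r
library(dplyr)
library(tidyr)

# Hypothetical long-format extraction: one row per engagement event
events <- tibble::tibble(
  participant_id = c(101, 101, 102, 103, 103, 103),
  type = c("comment", "reaction", "reaction", "comment", "poll_vote", "reaction"),
  timestamp = as.POSIXct(c("2024-03-04 10:15", "2024-03-05 08:02", "2024-03-11 19:30",
                           "2024-03-04 12:00", "2024-03-12 09:45", "2024-03-18 17:20"),
                         tz = "UTC")
)

# Assign an intervention week number to each event from its time stamp
intervention_start <- as.Date("2024-03-04")
events <- events %>%
  mutate(week = as.integer(as.Date(timestamp) - intervention_start) %/% 7L + 1L)

# Aggregate to one row per participant and one column per engagement type
participant_wide <- events %>%
  count(participant_id, type) %>%
  pivot_wider(names_from = type, values_from = n, values_fill = 0) %>%
  mutate(total_engagement = comment + poll_vote + reaction)

# Engagement counts are often skewed; inspect the distribution before choosing tests
hist(participant_wide$total_engagement)
```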
Descriptive statistics can be performed on each form of engagement separately (ie, posts, comments, reactions, and poll votes) and for total engagement (ie, the sum of all forms of engagement). Trend analyses and probability tests can be performed to understand trends in participants' engagement over the course of the intervention and to identify the posts and types of posts with the greatest engagement. Social network analyses can be used to map participants' interactions with each other and the group leader and to study how the frequency of these types of interactions is associated with outcomes. Qualitative content analysis and natural language processing can be performed to understand the nature of the participants' conversations, and when combined with statistical models and machine learning–based algorithms, they allow researchers to identify important contextual predictors of participant engagement. Facebook is an intervention modality that has the potential for reach and scalability, but investigators must chart the implementation path at the earliest stages of development. Although many pilot studies exist across a wide range of topics, few fully powered trials of Facebook-delivered interventions have been conducted and, to our knowledge, no implementation or dissemination trials exist; thus, this area of research is still in its infancy. Implementation of Facebook-delivered interventions in the real world could happen via partnerships with either web-based or offline entities. Developing web-based partnerships requires identifying web-based communities whose subject matter and content are compatible with the goals and values of the intervention. For example, an efficacious lifestyle intervention may have appeal in the ecosystem of diabetes-focused Facebook groups. Furthermore, an efficacious sun safety health communication intervention for parents of small children may be particularly appealing in parent-focused Facebook groups. In terms of partnerships in offline settings, insurers, clinics, and community health settings may be interested in this low-cost alternative to expensive digital platforms. Engaging and partnering with stakeholders in the early stages of intervention development allows their input to shape the intervention, which will increase the likelihood that the intervention finds its way into a real-world setting. Facebook is the most popular social media site for adults, with more monthly users than any other app in the Google Play store or Apple App Store, which presents myriad opportunities to research ways it can be used to improve public health. Both behavioral and health communication interventions can be adapted for Facebook group delivery, and given the immense ecosystem of organically formed groups on Facebook, implementation pathways abound. For example, investigators who have established the efficacy of a Facebook group–delivered behavioral intervention for diabetes self-management could partner with administrators of large existing diabetes Facebook groups to conduct an implementation trial in which members are offered access to a sister group where the intervention is delivered. Similarly, an investigator who has established the efficacy of a health communication intervention on childhood vaccination could partner with administrators of large Facebook groups or pages for expecting or new parents.
Successful intervention delivery via this modality requires knowledge of how a target audience uses Facebook and how to engage them with evidence-based content. Studies are needed to establish best practices for intervention content design in ways that maximize intervention receipt, participant engagement, and outcomes. For Facebook group–delivered interventions to achieve their potential for impact, scalability, and reach, research should be informed by the science of behavioral intervention development, health communication theory, social media marketing research, and implementation science.
Surfactin facilitates establishment of
f9465bb2-cc3d-4625-807d-9e23fdbbacc0
11833321
Microbiology[mh]
Microbes produce a plethora of small molecules with diverse activities, which are extensively exploited in modern society. Several of these natural products, often denoted as secondary or specialized metabolites (SMs), have been pivotal in contemporary medicine and biotechnological industries. They serve as frontline therapy against infectious diseases, therapeutics for cancer, food additives, or crop protection agents. Besides the long-standing tradition of industrial exploitation, SMs are considered chemical mediators that modulate interactions within and between microbial species, or even across kingdoms. For instance, defensive molecules might help producers defend their resources or niche from microbial competitors. Furthermore, some SMs function as signal molecules for coordinated growth (i.e. for quorum sensing) and cell differentiation. Among the diverse array of SM-producing microorganisms, the Bacillus subtilis species complex stands out as a prolific group with significant potential for SM production. This soil-dwelling group comprises several strains capable of synthesizing a wide range of SMs, including cyclic lipopeptides (LPs), polyketides, ribosomally synthesized and post-translationally modified peptides, and signaling molecules. Specifically, LPs are the most extensively studied class. They are synthesized by non-ribosomal peptide synthetases (NRPSs), which act as molecular assembly lines that catalyze the incorporation of amino acids into a growing peptide. In the B. subtilis species group, LPs are structurally categorized into three families, surfactins, iturins, and fengycins, based on their peptide core sequence. These molecules consist of seven (surfactins and iturins) or ten (fengycins) α-amino acids linked to β-amino (iturins) or β-hydroxy (surfactins and fengycins) fatty acids. LPs exemplify multifunctional SMs, acting not only as antimicrobials by antagonizing other microorganisms but also playing pivotal roles in processes including motility, cellular differentiation, surface colonization, and signaling. Although significant progress has been made in understanding the mode of action, biosynthesis, regulation, and functionality of LPs, their functions in natural environments remain largely uncharacterized. Experimental studies addressing these questions are constrained by the immense biological and chemical diversity of soil microbiomes and by the community-level interactions modulating SM functions. Additionally, technical challenges in tracking and quantifying the in situ production of LPs and other classes of SMs pose further barriers to elucidating their natural role in soil. Most evidence supporting the multifaceted functions of LPs has been gathered under in vitro conditions using pure cultures. However, these controlled settings may not accurately reflect the complexity of soil environments and the actual dynamics of SM production in a broader ecological context. To address this limitation, several studies have adopted less complex systems that mimic natural biomes. One promising strategy is the use of synthetic bacterial communities (SynComs), which allow fundamental ecological questions to be tested in controlled yet more ecologically relevant conditions. For instance, Cairns et al. used a 62-strain SynCom to demonstrate how low antibiotic concentrations impact community composition and horizontal transfer of resistance genes, whereas Niu et al.
built a seven-member community mimicking the core microbiome of maize, which was able to protect the host from a plant-pathogenic fungus. Simultaneously, the development of soil-like matrices and artificial soils has provided a useful option for studying chemical ecology in highly controlled gnotobiotic systems compatible with analytical chemistry and microbiological methods. Thus, coupling artificial soil systems with simplified SynComs is a fast-growing approach to examine microbial interactions while maintaining some degree of ecological complexity. This study aims to explore the roles of LPs produced by a B. subtilis isolate during SynCom assembly and, simultaneously, to dissect the impact of LPs on B. subtilis establishment success within SynComs. Utilizing an artificial soil-mimicking system, we assessed the impact of non-ribosomal peptides and bacillaene (a hybrid NRPS–polyketide) (sfp), as well as specifically surfactin (srfAC) or plipastatin (ppsC), on the ability of B. subtilis to establish within a four-member SynCom. We demonstrated that surfactin production facilitates B. subtilis establishment success within a SynCom in a soil-mimicking environment. Regarding SynCom assembly, we found that the wild-type and non-producer strains had a comparable influence on the SynCom composition over time. Moreover, we revealed that the B. subtilis and SynCom metabolomes were both altered. Intriguingly, the importance of surfactin for the establishment of B. subtilis was demonstrated in diverse SynCom systems with variable composition. Altogether, our work expands the knowledge about the role of surfactin production in microbial communities, suggesting a broad spectrum of action of this natural product.

Methods

Bacterial strains and culture media
All the strains used in this study are listed in . B. subtilis strains were routinely grown in lysogeny broth (LB) medium supplemented with the appropriate antibiotic (LB-Lennox, Carl Roth, Karlsruhe, Germany; 10 g/L tryptone, 5 g/L yeast extract, and 5 g/L NaCl) at 37°C with shaking at 220 rpm. The strains composing the different synthetic communities were grown in 0.5× Trypticase Soy Broth (TSB; Sigma-Aldrich, St. Louis, Missouri, USA) for 24 h at 28°C with shaking at 220 rpm.

Bacillus subtilis establishment in the Dyrehaven synthetic community propagated in a soil-like matrix
The impact of introducing B. subtilis P5_B1 and its secondary-metabolite-deficient mutants into the SynCom was investigated using an artificial soil-mimicking microcosm. Spherical beads were created by dripping a polymer solution, comprising 9.6 g/L Phytagel™ and 2.4 g/L sodium alginate in distilled water, into a 2% CaCl2 cross-linker solution. After 2 h of soaking in 0.1× TSB as a nutrient solution, the beads were sieved to remove any residual medium. Twenty milliliters of beads were then transferred to 50 ml Falcon tubes. Cultures of B. subtilis P5_B1 and the four SynCom members were grown as described above. The members of the SynCom were mixed at different ODs, as the fast-growing strains (i.e. S. indicatrix and Chryseobacterium sp.) had to be included at low density to ensure SynCom stability. Specifically, Pedobacter sp. and Rhodococcus globerulus were adjusted to OD 2.0, whereas S. indicatrix and Chryseobacterium sp. were adjusted to OD 0.1 before mixing. Suspensions of B. subtilis P5_B1 and its mutants were standardized to OD 2.0.
Next, bacterial inocula were prepared by mixing equal volumes of these adjusted cultures (four members plus each B. subtilis strain, respectively), and 2 ml of this suspension was then inoculated into freshly prepared beads. The bead microcosms were statically incubated at room temperature. Concurrently, microcosms inoculated with each strain as a monoculture were set up as controls. On days 1, 3, 6, 9, 12, and 14, one gram of beads was transferred into a 15 ml Falcon tube, diluted in 0.9% NaCl, and vortexed for 10 min at maximum speed to disrupt the beads. The suspensions were then used for cell number estimation via colony-forming unit (CFU) counting and flow cytometry. For colony counting, 100 μL of the sample was serially diluted, spread onto 0.1× TSA, and CFUs were estimated after 3 days. For the quantification of B. subtilis using flow cytometry, the samples were first passed through Miracloth (Millipore) to remove any trace of beads and diluted 100-fold in 0.9% NaCl. Subsequently, 1 ml of each sample was transferred to an Eppendorf tube and assayed on a flow cytometer (MACSQuant VYB, Miltenyi Biotec). gfp-labeled B. subtilis was detected using the blue laser (488 nm) and filter B1 (525/50 nm). Cell densities above 1 cell/ml were detectable. Controls with non-inoculated beads and 0.1× TSB were employed to identify background autofluorescence. Single events were gated in the GFP vs. SSC-A plot, where GFP-positive cells were identified for each sample.

WT: srfAC complementation assay
Overnight cultures of the strains of interest (OD600 = 2.0; WT::mKate and srfAC::gfp) were premixed at a 1:1 ratio. The inoculum was prepared by mixing equal volumes of the premixed Bacillus suspension with each member of the SynCom. Subsequently, 2 ml of this mixture was inoculated into freshly prepared beads. Propagation of the microcosms and B. subtilis quantification were performed as described above.

Detection of secondary metabolites from artificial soil microcosms
To extract secondary metabolites from the bead samples, 1 g of beads was transferred into a 15 ml Falcon tube with 4 ml of isopropyl alcohol:ethyl acetate (1:3 v/v) containing 1% formic acid. The tubes were sonicated for 60 min and centrifuged at 13400 rpm for 3 min. Then, the extracts were evaporated under N2 overnight, re-suspended in 300 μL of methanol, and centrifuged at 13400 rpm. The supernatants were transferred to an HPLC vial and subjected to ultrahigh-performance liquid chromatography-high resolution mass spectrometry (UHPLC-HRMS) analysis. The running conditions and the subsequent data analysis were as previously described.

Metatranscriptomic analysis
For RNA sequencing, the SynCom was propagated in the artificial soil matrix and challenged with either B. subtilis P5_B1 or the mutant impaired in NRP synthesis (the sfp mutant). A SynCom without B. subtilis inoculation served as the control group. On days 1 and 6, 4 g of beads from each treatment were snap-frozen in liquid nitrogen and stored at −80°C. RNA extraction was performed using the RNeasy PowerSoil Total RNA Kit (QIAGEN) following the manufacturer's instructions. After extraction, the samples were treated with the TURBO DNA-free kit (ThermoFisher) to degrade the remaining DNA. Library preparation and sequencing were carried out by Novogene Europe on a NovaSeq 6000 S4 flow cell with PE150 (Illumina). The reads were demultiplexed by the sequencing facility. Subsequently, reads were trimmed using Trimmomatic v.0.39. Quality assessment was performed using FastQC, and reads were sorted with SortMeRNA v.4.2.0 to retain only the non-rRNA reads for the downstream analysis.
Reads were then mapped onto the genomes of the strains (D764, D763, D757, D749, and B. subtilis P5_B1) using Bowtie2 (v.2.3.2). Differential gene expression analysis was conducted using the R package DESeq2, with the shrunken log2 fold change values used for the analysis. The P values of each gene were corrected using Benjamini and Hochberg's approach for controlling the false discovery rate (FDR). A gene was considered differentially expressed when the absolute log2 fold change was greater than 2 and the FDR was less than 0.05. For functional analysis, the protein-coding sequences were mapped to KEGG Ontology, Gene Ontology (GO) terms, and Clusters of Orthologous Groups (COGs) using eggNOG-mapper. The eggNOG-mapper-annotated dataset was then used for gene set enrichment and pathway analysis in GAGE. Transcriptomic analysis was performed from three independent replicates for each sample.

Inhibition assay
The in vitro antagonistic effect of B. subtilis P5_B1 and its secondary metabolite-deficient mutants was assessed using double-layer agar plate inhibition assays against each SynCom member (target bacterium). All strains were cultured for 24 h in 0.1× TSB medium as described previously. The cultures underwent two washes with 0.9% NaCl followed by centrifugation at 10,000 rpm for 2 min, and the OD600 was adjusted to 0.1. For the first layer, 10 ml of 0.1× TSA (1.5% agar) was poured into petri dishes and allowed to dry for 30 min. Then, 100 μL of each target bacterium was added to 10 ml of 0.1× TSB containing 0.9% agar preheated to 45°C. This mixture was evenly spread on top of the 0.1× TSA and dried for an additional 30 min. Subsequently, 5 μL of each B. subtilis suspension was spotted on each plate. The plates were then incubated at room temperature, followed by examination of the inhibition zones in the lawn formed in the top layer. Similarly, we investigated the impact of exometabolites produced by SynCom members on the growth properties of the B. subtilis strains. Spent media from SynCom cultures were collected after 48 h of growth in 0.1× TSB at 25°C and 250 rpm, filtered through 0.22 μm filters, and stored at 4°C. Growth curves were generated in 96-well microtiter plates. Each well contained 180 μL of 0.1× TSB supplemented with 5% spent media from each SynCom strain and 20 μL of either B. subtilis WT or its mutants. Control wells contained only 0.1× TSB medium without spent media supplementation. Cultivation was carried out in a Synergy XHT multi-mode reader at 25°C with linear continuous shaking (3 mm), monitoring optical density at 600 nm every 5 min.

Competition assay
Overnight cultures of the SynCom members and the gfp-labeled B. subtilis strains (WT, sfp, and srfAC) were pelleted (8000 rpm, 2 min) and resuspended in 0.1× TSB at an OD600 of 0.1. Next, 200 μL of a SynCom member was inoculated in the first row of a 96-well microtiter plate. From there, the SynCom member was 10-fold diluted by transferring 20 μL of culture to the next row containing 180 μL of medium. This process was repeated for 6 dilution steps. Subsequently, 20 μL of the gfp-labeled B. subtilis variants was added to each well to establish the co-culture. Monocultures of both the SynCom member and the B. subtilis variants served as controls to calculate competitiveness in co-culture. Cultivation was carried out in a Synergy XHT multi-mode reader (Biotek Instruments, Winooski, VT, US) at 25°C with linear continuous shaking (3 mm), monitoring the optical density and GFP fluorescence (Ex: 482/20; Em: 528/20; Gain: 35) every 5 min. Kinetic parameters were estimated using the GrowthCurver package in R.
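A rough illustration of how such plate-reader curves can be summarized with the growthcurver package is shown below; the growth values are simulated, and the relative area-under-the-curve calculation is only one plausible way to express growth reduction relative to monoculture.

```r
library(growthcurver)

# Simulated readings: time (hours) and density for B. subtilis grown alone
# versus in co-culture with a SynCom member (values are illustrative only)
time_h  <- seq(0, 24, by = 0.5)
od_mono <- 0.05 + 0.9 / (1 + exp(-0.5 * (time_h - 10)))
od_co   <- 0.05 + 0.5 / (1 + exp(-0.4 * (time_h - 12)))

fit_mono <- SummarizeGrowth(time_h, od_mono)
fit_co   <- SummarizeGrowth(time_h, od_co)

# Kinetic parameters: growth rate (r), carrying capacity (k), empirical AUC
c(r = fit_mono$vals$r, k = fit_mono$vals$k, auc = fit_mono$vals$auc_e)

# Fractional growth reduction in co-culture relative to the monoculture
1 - fit_co$vals$auc_e / fit_mono$vals$auc_e
```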
Bacillus subtilis specialized metabolite induction by synthetic community spent media
The WT strain was inoculated in the presence of culture spent media from the SynCom members. The spent media were obtained after 48 h of growth in 0.1× TSB and filtered through a 0.22 μm filter. Ten percent of spent media was added to Erlenmeyer flasks containing potato dextrose broth (15 ml in 100 ml flasks), followed by inoculation with an overnight culture of P5_B1 (OD600 = 0.1). After 48 h of incubation at 25°C and 220 rpm, the cultures were centrifuged, filtered, and subjected to HPLC analysis for surfactin detection. Surfactin could be detected down to 0.1 μg/ml using a purified standard.

Assessment of Bacillus subtilis establishment in diverse synthetic communities
To elucidate the role of surfactin in determining the establishment of B. subtilis within synthetic communities, we investigated whether P5_B1 can establish in various SynComs in a surfactin-dependent manner, using a methodology similar to the one described above for the competition assay. For this purpose, we selected five previously characterized bacterial SynComs, each with a distinct composition in terms of taxonomy and number of members, assembled for various objectives. In all cases, the SynCom members and the gfp-labeled B. subtilis strains (WT and srfAC) were cultured overnight in 0.5× TSB. Following two washes with 0.9% NaCl, the ODs were adjusted to 0.1 in 0.1× TSB. The SynCom members were mixed in a 1:1 ratio and then inoculated and diluted in a 96-well plate. Subsequently, 20 μL of the gfp-labeled B. subtilis variants was added to each well to create the co-culture. Monocultures of both the SynCom members and the B. subtilis variants were included as controls to determine competitiveness in the co-culture. Cultivation conditions and data analysis were as described for the competition assay. Each experiment was performed with at least three independent replicates per treatment.

Statistical analysis
Data analysis and graphical representation were performed using R 4.1.0 and the package ggplot2. Statistical differences in experiments with two groups were explored via Student's t-tests. For multiple comparisons (more than two treatments), one-way analysis of variance (ANOVA) and Tukey's honestly significant difference (HSD) test were performed. In all cases, normality and equality of variances were assessed using the Shapiro–Wilk and Levene tests, respectively. Statistical significance (α) was set at 0.05. A detailed description of the statistical analysis for each experiment is provided in the figure legends.
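The following R sketch runs this sequence of tests on made-up titer values purely for illustration (the car package is assumed for Levene's test; none of the numbers correspond to the study's data).

```r
library(car)  # leveneTest()

# Hypothetical final titers (log10 CFU/g of beads) for three B. subtilis genotypes
df <- data.frame(
  strain  = factor(rep(c("WT", "sfp", "srfAC"), each = 4)),
  log_cfu = c(7.1, 7.0, 6.9, 7.2, 4.1, 3.8, 4.3, 4.0, 4.2, 4.5, 3.9, 4.1)
)

fit <- aov(log_cfu ~ strain, data = df)
shapiro.test(residuals(fit))            # normality of residuals (Shapiro-Wilk)
leveneTest(log_cfu ~ strain, data = df) # homogeneity of variances (Levene)
summary(fit)                            # one-way ANOVA
TukeyHSD(fit)                           # Tukey's HSD post hoc comparisons
```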
Results

Description of the artificial soil system inoculated with the synthetic community
To assess the role of B. subtilis SMs in shaping bacterial community assembly under soil-like conditions, we previously customized a hydrogel matrix that supports the axenic growth of multiple bacterial strains and enables the quantification of specific B. subtilis LPs (i.e. surfactin and plipastatin). We subsequently assembled a four-member bacterial SynCom obtained from the same sample site as B. subtilis P5_B1. We selected these four isolates due to their shared origin with P5_B1, their stable co-existence in our hydrogel bead system, and their morphological distinctness, which allowed straightforward quantification by plate counting with detection limits around 10^2 CFU/g of beads. Although the relative abundance of each of the four strains fluctuated throughout the experiments, all four members were still detectable for up to three days of sampling. At the end of the experiment, we observed a clear strain co-existence pattern in the SynCom, as previously reported: Stenotrophomonas indicatrix and Chryseobacterium sp. were the most dominant strains, R. globerulus was kept at low density, whereas Pedobacter sp. was below our detection limit after day 3. Using this established experimental system, we explored the role of LPs in the successful establishment of B. subtilis, as well as in SynCom assembly and functionality. A schematic diagram illustrating the core experimental design and the scientific questions is presented in .

Surfactin production facilitates Bacillus subtilis P5_B1 establishment in a four-member synthetic community
To evaluate the contribution of specific LPs to P5_B1 establishment in the SynCom, we co-cultivated either the WT strain or the SM production-impaired mutants (sfp, srfAC, and ppsC) with the SynCom using the hydrogel matrix that mimics soil characteristics. Initially, we confirmed that P5_B1 and its mutant derivatives grew and produced the expected LPs when cultivated axenically in the soil-like system. All B. subtilis strains colonized the hydrogel system at comparable rates (ANOVA at day 14, P = .87), demonstrating a similar population dynamic pattern: a one-log increase within a day followed by a plateau of nearly 1 × 10^7 CFU/g of hydrogel after three days of cultivation, which was maintained up to the final sampling time on day 14. When introduced to the SynCom, the WT and the ppsC mutant (the latter producing surfactin but not plipastatin) successfully colonized the beads and maintained their populations at approximately 1 × 10^7 CFU/g throughout the experiment, comparable to the titers obtained in axenic cultivation. In contrast, the population sizes of the B. subtilis genotypic variants impaired in non-ribosomal peptide (sfp) or solely in surfactin (srfAC) production declined sharply during the first six days. By the end of the experiment, the cell titers had decreased to around three log-fold below the initial population levels (ANOVA, P < .01). Following up on these observations, we investigated whether the WT strain could rescue the srfAC mutant by co-inoculating a mixture of both strains into the SynCom. In this co-culture, the WT strain remained more competitive than the srfAC mutant. However, the presence of the WT strain, and presumably its surfactin production capability, evidently rescued the srfAC mutant, as its decline was less pronounced than when it was introduced alone into the SynCom.
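Population dynamics of this kind are typically examined as log-transformed CFU counts over time; a minimal ggplot2 sketch with invented, illustrative values might look as follows.

```r
library(ggplot2)

# Hypothetical dynamics (log10 CFU/g of beads); values are illustrative only
dyn <- data.frame(
  day     = rep(c(1, 3, 6, 9, 12, 14), times = 2),
  strain  = rep(c("WT", "srfAC"), each = 6),
  log_cfu = c(7.0, 7.1, 7.0, 7.0, 6.9, 7.0,   # stable population
              6.8, 6.0, 5.0, 4.5, 4.2, 4.0)   # declining population
)

ggplot(dyn, aes(day, log_cfu, colour = strain)) +
  geom_line() +
  geom_point() +
  labs(x = "Time (days)", y = expression(log[10]~CFU/g~of~beads))
```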
Subsequently, we investigated the potential contribution of individual SynCom members to the decline of the surfactin-deficient strains using a pairwise competition assay in planktonic cultures. Here, varying ratios of each SynCom member and B. subtilis were assessed, and the reduction in growth (i.e. the area under the curve) relative to the monoculture was measured. B. subtilis populations experienced a significant reduction when co-cultured with S. indicatrix D763 and Chryseobacterium sp. D764 at the highest ratios (1, 0.1, and 0.01 of the tested strain relative to the B. subtilis cultures), irrespective of the capability of B. subtilis to produce surfactin. However, in co-cultures where the SynCom members were diluted further (below 0.01 relative to B. subtilis), the B. subtilis strains lacking surfactin production were outcompeted by S. indicatrix D763 and Chryseobacterium sp. D764. Overall, B. subtilis WT showed greater competitiveness against these SynCom members, maintaining higher growth at higher dilution ratios compared to the sfp and srfAC mutants. In contrast, the less competitive strains in the bead systems, R. globerulus D757 and Pedobacter sp. D749, only impacted B. subtilis growth at the highest co-culture ratio, with the strains lacking surfactin production exhibiting growth comparable to the WT.

Bacillus subtilis secondary metabolites do not have a major impact on synthetic community assembly
Motivated by our observation that SM production, specifically surfactin, plays a crucial role in B. subtilis establishment success, we investigated whether these SMs impact the SynCom composition over time. To do this, we evaluated the abundance of SynCom members (CFU) using NMDS and PERMANOVA. Regardless of the B. subtilis strain introduced, the SynCom followed assembly dynamics similar to those described above: S. indicatrix and Chryseobacterium sp. dominated the community, whereas R. globerulus and Pedobacter sp. were less abundant. Estimation of the growth rates and carrying capacities of each SynCom member in 0.1× TSB revealed that S. indicatrix, the most dominant strain, grew significantly faster and reached the highest cell density, whereas Pedobacter sp. grew at the slowest rate. This could explain the observed SynCom composition in the hydrogel system, which was dominated by the fastest-growing and most productive strains. A fixed-effect PERMANOVA using sampling time, B. subtilis variant, and their interaction (how sampling time and B. subtilis variant jointly influence community composition) confirmed that the main driver of SynCom composition was the sampling time (PERMANOVA, R^2 = 0.49, P = .001), with a minor effect of the B. subtilis strain introduced (PERMANOVA, R^2 = 0.06, P = .037) and of the interaction (PERMANOVA, R^2 = 0.18, P = .005). Overall, the results suggested that introducing either the WT or its SM-impaired mutants did not have a major impact on SynCom assembly, with the differences mainly explained by the sampling time. We then investigated whether antagonistic activity between the SynCom members and B. subtilis could explain our observations. Using an in vitro inhibition test, we found that the less competitive strains, Pedobacter sp. D749 and R. globerulus D757, were both susceptible to B. subtilis. Specifically, the antagonistic activity against Pedobacter sp. D749 was linked to NRP production, particularly surfactin, whereas R. globerulus was inhibited by all the variants.
This suggests that other classes of SMs beyond NRPs produced by B. subtilis may contribute to the inhibition of these two species. Nevertheless, the strains abundant in the SynCom, S. indicatrix D763 and Chryseobacterium sp. D764, displayed no growth reduction by B. subtilis and its SMs, as evidenced by the absence of inhibition halos.

Bacillus subtilis and synthetic community metabolomes are both altered during the establishment experiments
To explore the role of B. subtilis secondary metabolites in shaping the SynCom metabolome, and how surfactin production was modulated in co-cultivation, we profiled both the SynCom and B. subtilis metabolomes at day 14 of the experiment using liquid chromatography–mass spectrometry (LC–MS). A targeted approach revealed that the production of surfactin was significantly increased when the WT was grown in the presence of the SynCom compared with WT production in axenic cultures (t-test, P = .0317). This finding was further validated in vitro by supplementing P5_B1 cultures with cell-free supernatants from each of the SynCom members or from all strains together. Here, the spent media from both the monocultures and the SynCom induced surfactin production, with the highest increase observed when P5_B1 was supplemented with R. globerulus supernatant. Although most of the molecular features (m/z) detected in our system remained unidentified, the molecular network clearly shows the presence of the B. subtilis LPs plipastatin and surfactin, and their analogs. Moreover, the presence of ornithine lipids (OLs) was observed in the dataset. These metabolites are derived from the outer membrane of Gram-negative bacteria as surrogates of phospholipids under phosphate-limited conditions. The lipid abundances (m/z between 597 and 671) increased in the SynCom alone, indicating that this conversion of phospholipids to OLs occurs in the absence of B. subtilis. Ecologically, OLs have been linked to stress responses. When surfactin producers (the WT or the ppsC mutant) were introduced into the system, the abundance of OLs was strongly reduced. In contrast, with the sfp and srfAC mutants, OLs remained at levels comparable to the SynCom alone. We corroborated this observation by conducting an experiment with the SynCom in the presence of pure surfactin. Here, the same group of compounds (m/z features) was altered in the surfactin-supplemented SynCom culture as in the co-cultures with surfactin-producing B. subtilis, although these compounds were abundant in the control samples (i.e., without B. subtilis).

Less competitive strains of the synthetic community were the species most transcriptionally affected by Bacillus subtilis specialized metabolites
To dissect the mechanism by which surfactin facilitates B. subtilis establishment within the SynCom, a meta-transcriptomic approach was conducted, comparing the transcriptional profile of the SynCom challenged with the WT with that of the SynCom challenged with the sfp mutant. In total, 430 and 490 genes were differentially expressed in the SynCom after 1 and 5 days, respectively, when inoculated with the WT compared with the sample seeded with the sfp mutant. On both sampling days, the less competitive strains, Pedobacter sp. D749 and R. globerulus D757, had the highest numbers of differentially expressed genes (DEGs) in the system, accounting for around 83% of the DEGs at day 1 and 95% of those at the last sampling point.
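A minimal sketch of this DEG-calling procedure with DESeq2 is shown below; the count matrix is simulated and the coefficient name is a placeholder, so the code illustrates the thresholding logic rather than reproducing the study's analysis.

```r
library(DESeq2)

# Simulated counts stand in for the mapped metatranscriptome (genes x samples,
# three replicates per condition); real data would come from the read mapping step
set.seed(1)
counts  <- matrix(rnbinom(600, mu = 50, size = 10), nrow = 100,
                  dimnames = list(paste0("gene", 1:100), paste0("s", 1:6)))
coldata <- data.frame(condition = factor(rep(c("sfp", "WT"), each = 3),
                                         levels = c("sfp", "WT")),
                      row.names = colnames(counts))

dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata,
                              design = ~ condition)
dds <- DESeq(dds)
res <- lfcShrink(dds, coef = "condition_WT_vs_sfp", type = "apeglm")  # needs apeglm installed

# DEG call as in the Methods: |shrunken log2FC| > 2 and BH-adjusted P < 0.05
deg <- subset(as.data.frame(res), !is.na(padj) & abs(log2FoldChange) > 2 & padj < 0.05)
nrow(deg)
```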
Subsequently, we explored the distribution of clusters of orthologous groups (COG categories) among the DEGs to discover which processes within the SynCom are potentially affected by the introduction of either the WT or the sfp mutant. Here, many DEGs were not annotated or were classified as COG S (function unknown). However, cell wall/membrane/envelope biogenesis (COG M) and amino acid transport and metabolism (COG E) were the most abundant functional categories among the genes downregulated in the SynCom with the WT strain added relative to the SynCom in the presence of the sfp mutant. We also explored the functions and enriched pathways of the DEGs for the less competitive strains (Pedobacter sp. D749 and R. globerulus D757). The GO enrichment analysis revealed that the two strains responded transcriptionally differently in the presence of the WT strain. Whereas the enriched biological processes in R. globerulus D757 were related to defense mechanisms or responses to other organisms, the upregulated processes in Pedobacter sp. were linked to amino acid transport, specifically of histidine.

Surfactin-facilitated establishment of Bacillus subtilis is conserved across diverse synthetic communities
To survey whether surfactin is important for the establishment of B. subtilis P5_B1 within diverse microbial communities, we assessed the abundance of the WT and the surfactin-deficient mutant in five previously published and characterized SynComs. These SynComs varied in composition, reflecting different functionalities and ecological niches. Overall, the co-culture experiments revealed that the ability of B. subtilis to establish within the SynComs depended on surfactin production, SynCom composition (number of members), and the inoculation ratio. In most SynComs, except for the Kolter Lab's SynCom, which was broadly invaded, both the WT and the srfAC mutant displayed reduced growth at high SynCom inoculation ratios (10:1, 1:1, 1:10). However, the WT, which produces surfactin, generally reached higher population densities than the surfactin-deficient mutant across most SynComs. The difference between the WT and the srfAC mutant was less pronounced in these shaken cultures than in the tests performed in the alginate bead microcosm, which could be due to the lack of the spatial structure present in surface-attached communities or to differences in oxygen diffusion between the two experimental setups. When B. subtilis was inoculated at high ratios relative to the SynComs, the growth dynamics resembled those observed in axenic cultures for both the WT and the srfAC mutant.
Bacillus subtilis is conserved across diverse synthetic communities To survey if surfactin is important for establishment of B. subtilis P5_B1 within diverse microbial communities, we assessed the abundance of WT and surfactin-deficient mutant in five previously published and characterized SynComs . These SynComs varied in composition, reflecting different functionalities and ecological niches. Overall, the co-culture experiments revealed that the ability of B. subtilis to establish within the SynComs depended on surfactin production, SynCom composition (number of members), and the inoculation ratio. In most SynComs, except for the Kolter Lab’s SynCom which was broadly invaded, both the WT and the srfAC mutant displayed reduced growth at a high inoculation ratio of SynCom (10:1, 1:1, 1:10). However, the WT, which produces surfactin, generally reached higher population densities compared to the surfactin-deficient mutant across most SynComs. Although the difference between the WT and srfAC mutant was less pronounced in these shaken cultures compared with the tests performed on the alginate bead microcosm, this could be due to the lack of spatial structure present in the surface-attached communities or the differences in oxygen diffusion between the two experimental setups. When B. subtilis was inoculated at high ratios relative to the SynComs, the growth dynamics resembled those observed in axenic cultures of both the WT and srfAC mutant . Secondary metabolites have traditionally been studied for their antimicrobial or anticancer properties. However, several of these natural products exert multifaceted functions, influencing the physiology of the producing microorganism and modulating interactions with other organisms . Understanding the role of these compounds in natural habitats ( e. g. in soil) is crucial for optimizing their use and biotechnological applications. However, this has been challenging due to the chemical and biological complexity and the limitations of quantifying SMs in situ . Therefore, this study aimed to elucidate the contribution of cyclic LPs, particularly surfactin and plipastatin, in the establishment and functional dynamics of both B. subtilis and SynCom members in a soil-mimicking environment. Our key findings demonstrate that surfactin production facilitates the establishment success of B. subtilis across multiple SynComs. Whereas surfactin was crucial for B. subtilis competitiveness, its production did not markedly alter the overall composition of the SynCom. Additionally, the metabolomic and transcriptomic analysis revealed that surfactin modulates both the producer and SynCom metabolic landscapes. Together, our results support past observations and the long-standing hypothesis, that bacteria lacking secondary metabolite production are less competitive than SM-producing wild-types . We experimentally demonstrated the contribution of surfactin in B. subtilis success when inoculated in the presence of a SynCom using a reductionist approach: four-member bacterial SynCom propagated in microcosms based on an artificial hydrogel matrix . One of the biggest methodological challenges in studying SM-driven microbial interactions is to mimic the environmental conditions. Consequently, the need for developing model systems of intermediate complexity for elucidating the ecological role of these molecules and shedding light on microbiome assembly-related questions has been widely stated . 
This is because classic axenic in vitro assays do not resemble crucial aspects of microbial niches, whereas natural samples are far too complex to dissect the underlying processes at the molecular level. Our SynCom is not intended to represent the natural sample site, i.e. Dyrehaven soil community, where all strains used in this study were isolated, but rather, it represents a reproducible, trackable, and easy-to-set bacterial assemblage useful for testing the role of SMs in SynCom assembly, and together with the soil-mimicking matrix, might help to overcome the bottlenecks imposed by soil complexity in terms of microbial diversity and SMs quantification. The described system aligns conceptually with recent approaches that used transparent microcosms mimicking the complexity of natural environments also allowing for testing hypotheses with statistical power in a controlled setup . Throughout the present work, we revealed the crucial role of surfactin in the establishment and persistence of B. subtilis within a set of diverse SynComs. Surfactin is by far one of the most-studied LPs and appears to confer a competitive advantage to B. subtilis under different conditions and environments. The relevance of this multifunctional SM has been demonstrated in biofilm formation , swarming and sliding motility , root and phyllosphere colonization , and triggering induced systemic resistance (ISR) in plants . Although it is not frequently highlighted as a primary function of surfactin, its contribution to the fitness of producers has been shown in different environmental conditions. For instance, Luo et al. demonstrated that a B. subtilis strain impaired in surfactin production did not colonize rice sheaths inoculated with Rhophitulus solani. At the same time, WT increased its population size over time . Similarly, Zeriouh and colleagues showed that srfAB mutant (of Bacillus amyloliquefaciens UMAF6614) presents reduced persistence in the melon phylloplane . In soil, similar observations were made where surfactin-impaired mutants of B. subtilis were unable to colonize Arabidopsis thaliana roots . In all these examples, the underlying mechanism links surfactin production with triggering Bacillus biofilm formation, surface spreading, and colonization. Even though further experiments are needed to fully understand how surfactin enhance B. subtilis establishment in the SynComs, we hypothesize that surfactin-mediated niche colonization (spreading and biofilm formation) and alterations of the SynCom chemical landscape might play important roles in the observed phenomenon. B. subtilis P5_B1 is a strong biofilm producer both in vitro and on plant roots in laboratory settings . We have shown here and previously that P5_B1 produces surfactin in the microcosms at levels that are presumably required for timing of biofilm formation (~15 μg/ g of beads) , which may aid its attachment to the hydrogel beads, creating niches where B. subtilis could minimize competition for resources with other SynCom members. Furthermore, the surfactin-induced modulation of the overall SynCom chemical landscape could lead to niche differentiation. By reshaping community chemodiversity, surfactin may help to create distinct ecological niches. This differentiation could be essential for reducing competition and allowing the coexistence of the surfactin-producing strain within the community. Alternatively, surfactin production could help B. subtilis to cope with a potential oxygen depletion induced by the SynCom growth. 
Such function of surfactin has been recently demonstrated where surfactin production mediated B. subtilis survival via membrane depolarization and increased oxygen diffusion under low oxygen concentration . We observe that the WT and the SM-mutant strains had hardly any influence on the composition and dynamics of the SynCom, but surfactin production altered the chemical diversity of the SynCom, besides the sensitivity of minor SynCom members to B. subtilis SMs. Several studies have highlighted that isolates of the B. subtilis species complex are not strong competitors of indigenous soil microbiota, and as a consequence, they did not shift the composition rhizosphere bacterial community to a considerable degree or mainly influenced specific groups of the rhizospheres’ microbial community . However, application of B. subtilis and its close-relative species in the rhizosphere improve plant health and resiliency, and SM production contributes to these properties. Beyond the impact of the examined LPs on B. subtilis growth dynamics and SynCom composition, we found that surfactin production was stimulated in the presence of the SynCom or specific SynCom members compared to B. subtilis monocultures. This observation supports the well-established notion that microbial interactions play a crucial role in modulating the production of bioactive secondary metabolites . Several studies have elegantly demonstrated the enhanced production of various natural products and their consequences for the producers (reviewed in ). For example, Andric et al. showed that Bacillus velezensis , a member of the B. subtilis complex, increases the production of bacillaene and surfactin upon sensing metabolic cues produced by Pseudomonas sessilinigenes CMR12a; leading to enhanced antibacterial activity by B. velezensis . The increased surfactin production observed under our experimental conditions likely provides benefits to B. subtilis during community-level interactions. Beyond its antagonistic activity, particularly against closely related species, surfactin production is linked to multiple beneficial Bacillus phenotypes, potentially serving as defensive responses upon detecting bacterial competitors. For instance, phenotypes such as increased biofilm formation , enhanced motility , induction of sporulation , and secondary metabolite production have been proposed as defensive mechanisms after sensing competitors . However, the underlying mechanisms regulating B. subtilis SM production in response to their neighbor’s activity remain largely unknown. The so-called “competition sensing” hypothesis provides an ecological framework, suggesting that microbes have evolved the ability to sense hazard signals coupled with a stress response that enables a “counterpunch” by upregulating the production of antibiotics and toxins . Similarly, the SynCom-secreted metabolome was modulated by the surfactin production. Here, we observed that primarily OLS lipids were downregulated when the SynCom was exposed to surfactin. In sum, soil bacteria are well known for their potential to synthesize a plethora of SMs with a wide diversity of activities. Our understanding of the ecological roles of these metabolites under natural conditions has just begun to be unlocked. Our observations, gathered in an intermediate ecological complex experimental system revealed the role of surfactin in the ecology of the producers and how this SM impacts the metabolism of its interacting partners. 
Thus, we hypothesize that the production of multimodal secondary metabolites by B. subtilis is a refined strategy that contributes to fitness and persistence in natural habitats where competition can be fierce.
Exome sequencing improves the molecular diagnostics of paediatric unexplained neurodevelopmental disorders
2eb44a7b-5f6b-4794-a38b-17e109468ec2
10845791
Pathology[mh]
Neurodevelopmental disorders are a genetically heterogeneous group of conditions affecting the normal development of the central nervous system (CNS), with an adverse prognosis for the quality of intellectual and social abilities, as well as daily functioning. With a reported prevalence of 1–2% of live births, they represent one of the most discussed current health and social issues . Their high heterogeneity is reflected in the genetic and phenotypic overlap of distinct disorders, making them difficult to differentiate clinically. The symptoms typically begin in childhood and persistently affect development. Intellectual disability of varying degrees, with isolated occurrence or accompanied by multiple congenital abnormalities affecting intellectual and somatic development, is widely reported as the most prominent clinical feature. The current guidelines for the genetic evaluation of individuals with NDDs and MCAs still recommend the chromosomal microarray analysis (CMA) as the first-tier molecular diagnostic test which overcomes the traditional karyotyping using G-banding . However, other diagnostic test as fragile X testing or metabolic tests may be conclusive in those cases with suggestive and prominent clinical symptoms . According to the current information obtained from sysID database, there are more than 1500 known and more than 1200 candidate genes of which rare variations can be responsible for the phenotypic manifestation of abnormal brain development and functioning . AutDB as an “autism information portal” summarizes the information about more than 1200 genes involved in the phenotypes of autism spectrum disorders (ASD) . The significant genetic overlap exists among neurodevelopmental and neuropsychiatric disorders due to shared signalling and developmental pathways . Trio-based ES involving affected individuals and their parents was recently proposed as the most effective molecular diagnostic approach for families with clinical features of Mendelian disorders including NDDs . Nowadays it is a rapidly evolving method for the simultaneous detection of sequence variants and copy-number variations (CNVs) . Therefore, the individuals with a family history of disease or with its solitary occurrence can avoid a low-yield or time-consuming diagnostic tests by undergoing this effective and powerful analysis. The rapid and accurate molecular diagnosis improves the short- and long-term disease management with reduced complications. The conclusive outputs of ES can specify the prognosis of disease and improve the quality of life regarding to the optimised and targeted, even symptomatic therapy. Moreover, the elucidation of the molecular basis of the abnormal phenotype can facilitate family-focused genetic counselling with reproductive outcomes . Therefore, it is not surprising that high-throughput genomic analyses as ES and GS are becoming preferable molecular diagnostic approaches in the genetic evaluation of individuals with NDDs and MCAs throughout the clinical laboratories and medical centres worldwide . However, the option of ES for the molecular diagnostics of unexplained NDDs and MCAs is mostly funded by research studies and grants so far in this country, therefore there are not any general recommendations or guidelines for ES as a standard genetic test. 
Instead, the phenotype-driven (virtual) gene panel “next generation” sequencing (NGS) encompassing the limited number of genes for specific entities, are widely offered, and covered by public health insurance based on the referral from the clinical geneticists. Their implementation to the molecular diagnostics of rare diseases reduces the turnaround time together with the maintenance of the comparable diagnostic yield as clinical ES . This study presents the results of trio-based ES in the group of 90 children with NDDs from 85 families, reaching a diagnostic yield of 48.9% (44/90) for pathogenic single-nucleotide variants, short insertions/deletions, and intragenic CNVs. Quality control parameters of ES Family-based ES was performed in 90 paediatric patients with NDDs and MCAs, their parents and/or in their unaffected siblings. Before variant prioritization and analysis, the QC metrics of processed sequencing outputs were calculated and inspected (Additional file ). Briefly, on average, more than 81 million unique reads per sample were mapped to the reference genome GRCh38/hg38 primary assembly. Approximately 98% of targeted bases were covered to at least 30X and median target coverage was calculated as 97X. The average proportion of flagged PCR duplicates was only 14% and the average uniformity reached 1.41 which is a good assumption for CNV analysis. Moreover, no considerable differences in QC metrics between index and pooled samples have been observed (Additional file ). The fraction of all target bases achieving 30X or greater coverage was calculated as 96% of all target bases in index cases (n = 18) and pooled samples (n = 14) as well. Diagnostic yield and in silico functional characterization The effective process of the variant prioritization and interpretation assessed the molecular diagnosis in 48.9% cases (44/90) of paediatric NDDs and associated MCAs. The causative SNVs and indels were identified in 45.6% (41/90) of cases whereas the intragenic CNVs were found in much lower proportion, 3.3% (3/90). The Gene Ontology (GO) analysis using the PANTHER™ Classification system with GO annotation was performed for the characterization of the gene set with causative variants. The set of 41 genes was categorized in five PANTHER™ Ontologies: Molecular Function (output for 10 categories), Biological Process (10 categories), Cellular Component (3 categories), Protein Class (12 categories), Pathway (45 categories) after manual curation. Almost half of genes (20/41) with causative variants are involved in binding and 34.1% (14/41) perform a catalytic activity on a molecular level. The disruption of cellular functioning is predicted due to the causative variants in 65.9% (27/41) of genes and 46.3% (19/41) of genes play a crucial role in metabolic processes. However, 63.4% (26/41) of genes remained uncategorized after the analysis of 177 curated, mostly signalling, pathways in the ontology PANTHER™ Pathway, indicating their broad structural and functional diversity as well as still uncharacterized involvement in the cell structure and functioning. Vice versa, 19.5% (8/41) genes were categorized in at least two pathways (Additional files ; , Sheet 1–5). 
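The following is a minimal sketch of how the coverage QC metrics reported at the beginning of this section (median target coverage, fraction of target bases covered to at least 30X) can be summarised from a per-base depth vector; the depths are simulated here for illustration and numpy is assumed to be available.

# Summarise exome coverage QC from simulated per-base depths over the target regions.
import numpy as np

rng = np.random.default_rng(0)
target_depth = rng.poisson(lam=97, size=1_000_000)  # hypothetical per-base depths

median_coverage = np.median(target_depth)
fraction_30x = np.mean(target_depth >= 30)

print(f"median target coverage: {median_coverage:.0f}X")
print(f"bases covered at >= 30X: {fraction_30x:.1%}")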
The PANTHER™ statistical overrepresentation test in these five PANTHER™ Ontologies assessed the overrepresentation of the analysed 41 genes: Biological Process (overrepresentation in 60 categories), Molecular Function (33 categories), Cellular Component (8 categories), Protein Class (one category), Pathway (6 categories including category “unclassified”) and Reactome Pathways (2 categories) after manual curation (Additional file , Sheet 1–5). Pathogenic sequence variants More than half of causative variants (22/43) were of de novo origin (de novo variants observed in two pairs of monozygotic twins were counted once per one family). Approximately 40% (17/43) were of a familial origin whereas five variants were of paternal origin, including two of them confirmed as mosaic of ~ 11% ( CTNNB1 gene) and ~ 15% ( DYNC1H1 gene), respectively, in paternal DNA samples. The comparable degrees of mosaicism were found in the paternal samples of buccal swabs, ~ 13% for the CTNNB1 gene variant and ~ 10% for the DYNC1H1 gene variant, respectively. Other two variants ( GRIN2A and NFIB genes) were inherited from affected fathers and one variant ( CACNA1A gene) was inherited from apparently unaffected fathers. Two variants ( GCH1 and KCNC3 genes ) and four hemizygous X-linked variants ( EDA, OPHN1 , SLC16A2 and PTCHD1 genes) were inherited from asymptomatic mothers. The detailed analysis of genotype–phenotype correlation suggested four variants of maternal origin ( GABRB2, PBX1, CUX2 and CACNA1C genes) as the cause of abnormal phenotypes not only in probands but also in their mothers. Other two cases (2/43) had causative variants in the compound heterozygosity ( BLM and NACLN genes), resulting in the clinical manifestation of associated rare, autosomal recessive (AR) disorders. Of two de novo mitochondrial causative variants, one was identified as homoplasmic (~ 95%, MT-ATP6 gene) in the affected individual. Surprisingly, another LP variant ( MT-CO3 gene) was found in ~ 20% heteroplasmy in affected monozygotic twins. The origin of causative variants was not resolved in one case with the occurrence of two causative variants ( TCOF1 and CPA6 genes), since the paternal DNA sample was not available. Two recurrently mutated genes in unrelated index cases were enriched in the list of causative variants, SHANK3 (3 cases) and RAI1 (2 cases). The causative and candidate variants and their pathogenicity are summarized (Additional file ). The set of causative variants was then characterized in terms of their molecular consequences. The highest proportion was made up of truncating variants, including 34.1% (15/44) annotated as frameshift and 27.3% (12/44) of stop-gain (nonsense) variants. Other 25.0% (11/44) variants were categorized as missense, followed by 9.1% (4/44) of splice-site variants, which break a canonical donor splice site (3 cases) or acceptor splice site (1 case). Of the total number of 47 variants, 53.2% of them (25/47) were novel, while the remaining (46.8%, 22/47) have been published in the relevant scientific resources (Fig. a–c). Novel variants The outputs of the variant prioritization led to the identification of novel rare causative variants (53.2%, 25/47) in “OMIM-morbid” genes with the clinical impact in pathogenesis of NDDs. 
The analytical process included their classification using the guidelines of the American College of Medical Genetics and Genomics (ACMG), in silico analysis using relevant databases and tools, searching for the relevant scientific literature, and finally, the genotype–phenotype correlation using the clinical data (Additional file ). Candidate genes Furthermore, genome-wide approaches including ES may contribute to discovering novel candidate genes. The extensive process of the multistep in silico analysis to identify candidate genes then included: (1) The information obtained from the OMIM database related to terms such as “brain”, “central nervous system” or “development”, (2) The PANTHER™ Classification system to summarize gene ontology analysis, and (3) STRING analysis to elucidate protein–protein association networks. The pathogenicity classification of novel variants was tested using the integrated engines Franklin by Genoox ( https://franklin.genoox.com ) and VarSome . Two novel variants GRIN3B (NM_138690.3):c.931C > T and ASAP1 (NM_018482.4):c.1867C > T were prioritized (Additional files , ). The additional in silico analyses using the PANTHER™ Functional classification including the candidate genes and their top 10 interaction partners were performed based on the STRING Interaction Network. The GRIN3B gene has been assigned in all PANTHER™ Ontologies, suggesting its important role in cellular signalling as a transmembrane signal receptor. Unlike the GRIN3B gene, the ASAP1 gene has been assigned to only three of five PANTHER™ ontologies. Variants of borderline classification of pathogenicity A considerable number of variants is still lacking the conclusive classification in relation to the tested genetic condition and remains on the borderline classification of pathogenicity. The prioritized variants included two variants in candidate genes ( GRIN3B and ASAP1 ; mentioned above), five novel variants in known NDD genes ( HUWE1, KDM3B, CREBBP, TAOK1 and PIK3R1 ), a recurrent variant in the PGK1 gene and a recurrent haplotype in the ZGRF1 gene. Their molecular and clinical consequences of these variants are described in Additional files and . Multiple-hit model and dual diagnoses The genetic heterogeneity of NDDs and MCAs was declared by eight index cases in which more than one causative/possibly causative variant was detected, and one index case with the combination of a recurrent haplotype in the ZGRF1 gene of paternal origin and two non-polymorphic CNVs of maternal origin. The conclusion on their clinical impact on the phenotype was done based on the comprehensive analysis of the molecular mechanism of their pathogenicity, genotype–phenotype correlation, familial segregation of the abnormal phenotypic manifestation and relevant scientific literature. Moreover, the role of the genetic background with possible epistatic interactions and other effects between the affected loci could be an underlying mechanism for the abnormal phenotypic presentation. The cases are listed in Additional file and characterized in detail in Additional files and . Secondary findings Large-scale ES may also uncover P and LP variants not related to the primary diagnosis. Before the study enrolment, the parents/legal guardians were counselled about this possibility to choose to participate or not in this analysis in the informed consent . 
SFs were found in four genes, BRCA1 (rs80357609, rs80358002), BRCA2 (rs80359351), HFE (rs1800562) and TGFBR1 (NM_004612.4:c.1133A > G), in seven individuals (3 index cases and 4 parents) (Additional file ). None of the carriers of SFs have manifested the related conditions so far. Copy-number variations (CNVs) The simultaneous detection of CNVs and genomic regions with long continuous stretches of homozygosity (LCSH) aimed to verify ES as a compatible and complementary method to CMA and to improve the diagnostic yield beyond the scope of variant calling. An additional diagnostic yield of 3.3% (3/90) was reached through the presence of a causative familial intragenic deletion in the GRIN2A gene and a de novo deletion in the ZC4H2 gene, both confirmed by qPCR in the corresponding families (Additional file a). All but one of the non-polymorphic CNVs > 100 kb identified by CMA in index cases (96%, 26/27) were detected by ES (Additional file b). Long continuous stretches of homozygosity (LCSH) Based on clinical information, no consanguineous families, which would have been revealed by the presence of multiple LCSH, were enrolled in the study. After excluding common LCSHs and manually curating the CMA outputs and the SNV/indel analysis from trio ES data, no additional LCSHs harbouring homozygous causative variants were identified. A mosaic LCSH affecting the short arm of chromosome 11 (11p) was uncovered in the index case 44-P, as described in our previous study . Parental sample pooling The parental sample pooling was tested as an alternative, cost-effective strategy for trio-based ES in routine molecular diagnostics. The de novo origin of causative variants in index cases (MED12, CHD2, CHD7, IRF2BPL, RAI1 and BCL11B) as well as the familial segregation of variants (PBX1, CUX2, GABRB2, CACNA1C and NFIB) was then resolved by Sanger sequencing in the corresponding families (Additional file ). Sanger sequencing also specified which individuals in the pooled parental samples carried P or LP variants in the CFTR gene and the FVL variant (F5 gene), owing to their increased carrier frequency in the population. In pooled parental samples, the average alternative allele frequencies (AAFs) of P and LP heterozygous variants (including those for human congenital disorders with AR inheritance) were calculated from a representative sample of P and LP variants. No considerable differences between observed and expected mean AAF were found (Table ). The CNV calling was initially performed in the index cases and parental pools; however, the CNV prioritization was done directly only in index cases. The familial segregation of a rare 19q13.3 microduplication (family 75) and an 18q12.1 microdeletion (family 73) was resolved using qPCR (Additional file a).
Despite the rapid progress in the development and implementation of advanced genomic analyses, understanding the aetiology of NDDs remains challenging due to their broad genetic and phenotypic heterogeneity. Nowadays, trio-based ES represents an effective tool to elucidate the molecular genetic diagnosis as well as to uncover novel genetic loci responsible for abnormal phenotypes. It has become an integral part of routine molecular diagnostic algorithms in a growing number of laboratories due to its clinical benefit, cost effectiveness and reduced turnaround time. In this study, a molecular diagnosis was achieved for 44 of the 90 children with NDDs from 85 families (trios or foursomes), resulting in a total diagnostic yield of 48.9%. Pathogenic SNVs and indels were identified in 45.6% (41/90) and causative intragenic CNVs were detected in 3.3% (3/90) of affected children. Generally, trio-based ES resolves the molecular diagnosis in approximately 36% of individuals with NDDs (ranging from 31% for isolated NDDs to 53% for NDDs with associated congenital abnormalities), which greatly exceeds the 15–20% diagnostic rate of CMA. The functional analysis using the PANTHER Classification system, which combines gene function, ontology, pathway and statistical analysis, showed that the 41 genes altered by causative variants are involved in fundamental developmental processes and cellular functioning. These functional analyses provided further evidence for the diverse phenotypic effects of causative variants, highlighting the phenotypic heterogeneity of NDDs. De novo variants comprised the highest proportion, 51.2% (22/43), of all causative variants detected by trio/foursome-based ES. They were associated with autosomal dominant, X-linked or mitochondrial inheritance of NDDs, confirming the crucial role of the affected genes in the development and functioning of the CNS.
The loss-of-function causative variants change evolutionary conserved amino acid residues which exhibit an intolerance to variation . Moreover, the genes involved in (neuro) developmental processes are strongly evolutionarily conserved to act in multiple conserved pathways . The familial occurrence of causative variants was uncovered in 15.6% (14/90) of paediatric patients from twelve families in which the phenotypic heterogeneity of NDDs was observed. This heterogeneity was attributed to variable combinations of de novo or familial causative variants (six families) as well as familial segregation of single causative variants with incomplete penetrance and/or variable phenotypic manifestation (six families). Other genetic and non-genetic modifiers and their epistatic interactions can modulate the phenotypic manifestation. Conversely, the dual diagnosis should be considered in cases of co-occurrence of multiple highly penetrant causative variants . The prioritized variants were classified using the Franklin ( https://franklin.genoox.com/ ) and VarSome engines which integrate basic and advanced annotations, a wide variety of in silico prediction tools to obtain pathogenicity scores, population-specific allelic frequencies as well as the default final classification using the ACMG criteria. Additional in silico analyses using NMDEsc Predictor and NMDetective tools were performed for the prediction of the molecular consequences of PTCs. The battery of rules suggests the degradation of aberrant transcripts by NMD or their translation to an altered protein with gain-of-function or dominant-negative effects . Since most PTCs were predicted to initiate the process of NMD, the haploinsufficiency of a particular gene is suggested as the leading molecular mechanism of the related condition. In common, haploinsufficiency of those genes encoding transcription factors and chromatin regulators has been suggested as a mechanism of pathogenesis for ASD and developmental disorders . Another set of in silico prediction tools (Human Splicing Finder, SpliceAI and MutationTaster 2021) served to predict the molecular consequences of splicing variants . However, not only canonical splice site variants may lead to splicing defects and deleterious molecular consequences. Cryptic splice site variants arising from deep intronic variants or apparently benign sequence changes contribute up to 11% of cases of ASD . Variant classification should be perceived as a dynamic process including periodic reanalysis in the context of updates in bioinformatics, novel variant annotations and clinical data as they may be beneficial for an additional 10–15% of individuals without a conclusive diagnosis after the initial ES . Moreover, complementary analyses such transcriptional profiling using RNA sequencing or methylation profiling can be beneficial for those individuals lacking a molecular diagnosis after ES or genome sequencing (GS). Large-scale ES/GS significantly improves the diagnostic standards not only by increasing the diagnostic rate but also by detecting SFs in genes which are not related to the primary indication for the ES/GS. Reporting SFs altering “medically-actionable” genes defined by the ACMG recommendation can result in a profit due to the prevention of life-threatening conditions . The observed yield of SFs, 2.7% (7/261), corresponds to an expected rate of < 3% of individuals who are commonly identified as carriers of at least one reportable SF in one of those genes defined by the ACMG. 
The proportion of SFs in ES/GS may vary depending on the occurrence of specific variants in founder populations. Reporting SF variants to clinicians is crucial to enable early intervention and to reduce life-threatening effects. The integrative ES analysis of causative sequence variants and CNVs resolves the molecular diagnosis in more than 50% of individuals with NDDs. However, the reliability of CNV detection from ES data can be affected by several factors, including the design of the capture kit, sequencing depth and the choice of computational algorithms. In this study, the combination of library preparation using the Human Core Exome kit enriched by a spiked-in RefSeq panel and custom spiked-in probes for mtDNA (Twist Biosciences), sequencing on the Illumina NovaSeq 6000 and two different bioinformatic pipelines for CNV detection proved to be an optimal strategy for ensuring data credibility. ES with a coverage of at least 100X is a suitable approach for the detection of large CNVs as well as intragenic CNVs. The detection of regions of homozygosity can narrow down the number of prioritized variants and reduce the turnaround time in consanguineous families, where large chromosomal segments/haplotypes are transmitted across generations. Commonly, the identification of multiple LCSH using SNP arrays or ES/GS can indicate parental consanguinity, which increases the risk of homozygous causative variants and related AR phenotypes. The use of ES in offspring of consanguineous couples improves the diagnostic yield to approximately 55% by identifying causative homozygous variants. As a final step in the study, an alternative strategy involving pooled parental samples was tested in a total of fourteen pools of two or three sex-matched parental samples. This approach shows promise as a cost-effective alternative for routine molecular diagnostics. However, the efficiency of this approach for detecting low-level mosaicism decreases as more parental samples are pooled. Even if Sanger sequencing is applied, its sensitivity in detecting somatic mosaicism is limited to 15–20%. In general, less than 5% of apparently de novo causative variants arise from low-level parental mosaicism (< 10% variant frequency in the tested tissue). Low-level somatic mosaicism occurs equally in paternal and maternal samples, in contrast to gonadal mosaicism, which is strongly disproportionate and prevails in the paternal germline due to the many cell divisions during spermatogenesis. However, discriminating true-positive low-level mosaicism from cross-contamination or background noise remains challenging; therefore, the development of novel computational pipelines is strongly encouraged. Detecting CNVs from sequencing depth data in parental samples using the sample pooling strategy is not optimal, as accurately detecting these variants at low allelic representation within the pool is methodologically challenging. Thus, verifying the causative CNVs in index and corresponding parental samples through alternative methods such as CMA, MLPA or qPCR, as previously suggested, is necessary. ES significantly improves the diagnostic yield for individuals with unexplained NDDs and associated congenital abnormalities compared with standard routine diagnostic approaches. It represents a credible and cost-effective tool for the simultaneous detection of DNA sequence variants and CNVs.
Implementation of ES in the diagnostic algorithm can reveal novel candidate genes for NDDs and enhance our understanding of the genetic aetiology behind rare paediatric disorders of neuronal development. Finally, elucidation of the molecular mechanisms involved in the pathogenesis of NDDs would improve genetic counselling, leading to the prevention of medical complications and better utilization of supportive resources. Patient recruitment and sampling The informed consent for this study has been approved by the Research Ethics Committee of Masaryk University and Ethics Committee of University Hospital Brno. The risk of secondary findings (SF) and their clinical impact has been fully explained by the clinical geneticists The legal guardians have been asked to opt in or opt out to receiving SF. Their interpretation and related genetic counselling have been provided by the clinical geneticists. Totally 90 paediatric patients (index cases) from 85 families including 79 trios (affected individuals with parents, 78 cases; or the affected individual and unaffected parent and sibling, one case) and six foursomes (affected siblings with parents, five cases; affected individual with parents and unaffected sibling, one case) were recruited at the Department of The Medical Genetics and Genomics (University Hospital Brno) from May 2020 to December 2022. The index cases were evaluated clinically with inclusion criteria: unexplained severe neurodevelopmental disorder (intellectual disability, autism spectrum disorder or global developmental delay) with possible multiple congenital abnormalities. The age profile of the study cohort and clinical information are summarized in Tables and . The routine cytogenetic analysis of a karyotype and chromosomal microarray analysis (CMA) using CGH or CGH + SNP arrays without conclusive molecular diagnosis preceded ES. Peripheral blood samples were collected in sterile heparinized tubes for cytogenetic analysis. Genomic DNA samples were extracted from 1 ml of peripheral blood using the MagNaPure system (Roche Diagnostics, Basel, Switzerland), LabTurbo Compact System (LabTurbo, Shilin Dist., Taipei City, Taiwan) or phenol–chloroform extraction. Quality control metrics were then assessed using the NanoDrop® ND-1000 (Thermo Fisher Scientific, Inc., Waltham, MA, USA) and the Qubit® 2.0 Fluorometer (Thermo Fisher Scientific, Inc.). The cytogenetic analysis of the karyotype was performed using a routine G-banding procedure, followed by CMA using SurePrint G3 CGH and CGH + SNP Microarray platforms (Agilent Technologies, Inc., Santa Clara, CA, USA), according to the manufacturer's recommendations as described elsewhere . Moreover, the study excluded those cases which were concluded as Fragile X syndrome or those which were elucidated by molecular genetic testing (small- or medium-sized “next generation” sequencing panels or Sanger sequencing). Before the enrolment in the trio ES, the legal guardians (parents) signed an informed consent (approved by the Research Ethics Committee of Masaryk University and Ethics Committee of University Hospital Brno). Exome sequencing High-quality genomic DNA samples of required quantities were used for a library preparation with the Human Core Exome Kit enriched by spiked-in RefSeq panel (Twist Bioscience, San Francisco, CA, USA) and custom spiked-in probes for mtDNA. DNA libraries were then sequenced on the Illumina NovaSeq 6000 (Illumina, Inc., San Diego, CA, USA). 
The steps of DNA sample processing and sequencing were purchased as a commercially available service (Institute of Applied Biotechnologies, Olomouc, Czech Republic). In the final phase of the study, a subset of parental DNA samples was used to test and verify the design of pooled parental samples, as suggested previously. DNA samples of seventeen index cases were processed to libraries and sequenced as mentioned above. The parental samples were precisely quantified and diluted, if necessary. Their equimolar amounts were mixed to obtain fourteen independent pools: four pools of two maternal samples, four pools of two paternal samples, three pools of three maternal samples and three pools of three paternal samples. The pooled samples were then processed to libraries and sequenced. Bioinformatics processing of ES data Raw sequencing data were processed as described elsewhere. Briefly, quality control (QC) was performed using FastQC v0.11.9 (released 8th January 2019; https://www.bioinformatics.babraham.ac.uk/projects/fastqc/ ) and Picard v2.25.6 (released 15th June 2021; https://broadinstitute.github.io/picard/ ). After trimming of low-quality reads and adapter contamination with fastp v0.20.1 (released 8th April 2020; https://github.com/OpenGene/fastp/tree/v0.20.1 ), the remaining reads were aligned to the reference human genome hg38 primary assembly with BWA v0.7.17-r1188 using default parameters (released 23rd October 2017; https://github.com/lh3/bwa/tree/v0.7.17 ). Marking of duplicate reads and fixing of mate information were then performed with the Picard Toolkit ( http://broadinstitute.github.io/picard/ ). QC steps and coverage were reviewed using the in-house software Genovesa (Bioxsys, s.r.o., Czech Republic). The single nucleotide variant (SNV) and CNV calling with further prioritization were described previously and in the Additional file . In the parental sample pooling design, VCF files of index cases and the corresponding parental pools (containing the parental samples for the given index case) were merged to streamline the variant prioritization and to assess the parental segregation of familial variants. The heterozygous calls in pooled samples were expected to appear in lower proportions of reads than observed in non-pooled samples. The expected percentage of reads carrying a heterozygous call in a pool (N) was calculated using the formula N = 1 / (n_parents × 2 alleles) × 100%. With a uniform enrichment of targeted regions and a median sequencing depth of 100X, an average heterozygous call was expected in 25% (two samples per pool) and 16.67% (three samples per pool) of mapped reads, respectively. The pooled parental samples were added to the index cases to normalize read counts and produce the pooled reference for CNV calling. Therefore, the CNVs were called using only the optimized in-house bioinformatics pipeline (Genovesa), prioritized only in index cases, and defined by technical thresholds with a read ratio ≤ 0.7 for losses and ≥ 1.3 for gains. Gene-set in silico analysis The in silico analysis using the PANTHER™ Classification system v17.0 (released 23rd February 2022; http://www.pantherdb.org/ ) was used for the functional classification of the gene set with reported causative (pathogenic and likely pathogenic; P and LP) variants. The gene set was then loaded into the web interface for the statistical overrepresentation test (with false discovery rate, FDR, p < 0.05).
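Returning to the parental-pooling design described above, the expected alternative allele fraction of a heterozygous variant carried by a single parent in a pool follows directly from the formula N = 1 / (n_parents × 2 alleles); a minimal, illustrative sketch in Python:

# Expected alternative allele fraction of one heterozygous carrier in a parental pool.
def expected_het_fraction(n_parents: int) -> float:
    return 1.0 / (n_parents * 2)

for n in (1, 2, 3):
    print(f"{n} parent(s) per pool: expected AAF = {expected_het_fraction(n):.2%}")
# -> 50.00% for a single sample, 25.00% for two-parent pools, 16.67% for three-parent pools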
The gene interactions were then studied using the STRING Interaction Network v11.5 (released 12th August 2021; https://string-db.org/ ) for candidate genes with novel variants. First, the analysis was run to specify the top 10 predicted interaction partners with the default settings as follows: Network Type: full STRING interaction network; Required score: medium confidence (0.400); Size cut-off: no more than 10 interactors. The candidate gene and its top 10 predicted interaction partners were loaded into the web interface for in silico analysis using the PANTHER™ Classification system with default settings for both the functional classification and the statistical overrepresentation test. The overrepresentation test was performed against the reference list represented by all genes in the database for Homo sapiens (20,589 genes). Only outputs with FDR p < 0.05 were considered for further evaluation. Sanger sequencing After the variant prioritization, clinically relevant sequence variants (de novo or inherited) were verified using Sanger sequencing as described elsewhere. Sanger sequencing also served for the determination of the sample carrying P or LP variants, including SFs, in the parental sample pools. Quantitative real-time PCR (qPCR) The non-polymorphic and/or clinically relevant CNVs that were below the detection limit of microarray analysis were verified using qPCR with custom-designed primers as described elsewhere. The reactions were run in duplicate for the index case, parents, and a commercially available reference DNA sample (Agilent Technologies) using the Power SYBR Green PCR Master Mix and default cycling conditions following the manufacturer's recommendations (Thermo Fisher Scientific). The ERH gene served as an endogenous control. The relative quantification was assessed from Ct values by calculating R-values (R = 2^−ΔΔCt). The R-value cut-offs were set at < 0.7 for DNA losses and > 1.3 for DNA gains.
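The qPCR relative-quantification rule described above can be written out as a short, self-contained sketch (the Ct values below are invented; this illustrates the 2^−ΔΔCt calculation and the loss/gain cut-offs, not the laboratory's own script):

# Relative copy-number ratio from qPCR Ct values using R = 2^(-ddCt),
# normalised to the ERH endogenous control and a reference DNA sample.
def copy_number_ratio(ct_target_case, ct_erh_case, ct_target_ref, ct_erh_ref):
    d_ct_case = ct_target_case - ct_erh_case   # normalise patient sample to ERH
    d_ct_ref = ct_target_ref - ct_erh_ref      # normalise reference DNA to ERH
    return 2 ** (-(d_ct_case - d_ct_ref))

r = copy_number_ratio(ct_target_case=26.8, ct_erh_case=25.0,
                      ct_target_ref=25.7, ct_erh_ref=25.0)
call = "loss" if r < 0.7 else "gain" if r > 1.3 else "normal"
print(f"R = {r:.2f} -> {call}")   # R ~ 0.47 -> loss, consistent with a heterozygous deletion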
Genomic DNA samples were extracted from 1 ml of peripheral blood using the MagNaPure system (Roche Diagnostics, Basel, Switzerland), LabTurbo Compact System (LabTurbo, Shilin Dist., Taipei City, Taiwan) or phenol–chloroform extraction. Quality control metrics were then assessed using the NanoDrop® ND-1000 (Thermo Fisher Scientific, Inc., Waltham, MA, USA) and the Qubit® 2.0 Fluorometer (Thermo Fisher Scientific, Inc.). The cytogenetic analysis of the karyotype was performed using a routine G-banding procedure, followed by CMA using SurePrint G3 CGH and CGH + SNP Microarray platforms (Agilent Technologies, Inc., Santa Clara, CA, USA), according to the manufacturer's recommendations as described elsewhere . Moreover, the study excluded those cases which were concluded as Fragile X syndrome or those which were elucidated by molecular genetic testing (small- or medium-sized “next generation” sequencing panels or Sanger sequencing). Before the enrolment in the trio ES, the legal guardians (parents) signed an informed consent (approved by the Research Ethics Committee of Masaryk University and Ethics Committee of University Hospital Brno). High-quality genomic DNA samples of required quantities were used for a library preparation with the Human Core Exome Kit enriched by spiked-in RefSeq panel (Twist Bioscience, San Francisco, CA, USA) and custom spiked-in probes for mtDNA. DNA libraries were then sequenced on the Illumina NovaSeq 6000 (Illumina, Inc., San Diego, CA, USA).
Additional file 1: Quality Control (QC) metrics for outputs from exome sequencing.
Additional file 2: Functional Classification Analysis using the PANTHER TM Classification system. Additional file 3: The Gene Ontology (GO) analysis using the PANTHER TM Classification system. Additional file 4: PANTHER Overrepresentation Test for the gene set with causative variants. Additional file 5: Molecular characterization of causative variants and variants with borderline classification of pathogenicity. Additional file 6: Molecular characterization of novel candidate variants in NDD genes and novel variants in the candidate genes and their clinical consequences. Additional file 7: Molecular characterization of variants with borderline classification of pathogenicity (VUS-LP) and their clinical consequences. Additional file 8: Molecular characterization of secondary findings in the “medically-actionable” genes on the ACMG list. Additional file 9: a) Non-polymorphic intragenic CNVs detected by ES, b) Non-polymorphic CNVs initially detected by CMA and then verified by ES. Additional file 10: The SNV and CNV detection and prioritization.
Evaluation of the level of information of pediatricians about the diagnosis and management of cryptorchidism
f15a50ba-4d35-4b11-bb6a-a36c0db40c01
11662753
Pediatrics[mh]
Cryptorchidism is the most common genitourinary anomaly in male infants, and it is defined as a testicle located outside the scrotum and at any point in its normal migration path. The incidence is variable and depends on factors such as gestational age, affecting 1.0–4.6 % of term infants and 1.1–45 % of preterm neonates. According to the Information System on Live Births (SINASC), in Brazil in 2020, 444 undescended testes were registered, corresponding to 1.88 % of the congenital anomalies reported in the same year. Apparently, the prevalence of this disease is increasing, but these data are possibly related to the increased survival of extremely premature and small-for-gestational-age babies. Cryptorchidism may be associated with disorders of sexual development and congenital malformation, but it is mainly found as an isolated malformation in up to 85 % of cases. It is known that testes descent is related to factors such as testicular enlargement, increased intra-abdominal pressure, hormonal action, and growth of the cranial part of the abdomen moving away from the future pelvic region. When this migration does not occur during pregnancy, it can still happen in the first six months of life due to hormonal activity. Hence, intervention is not recommended before this age. Regarding the complications associated with cryptorchidism, a reduction of germ cells has been observed in patients with cryptorchidism after one year of age. Also, there is a greater risk of developing germ cell tumors in adolescent patients. It is known that men with a history of this disorder have an increased risk of cancer. Studies point to an increased incidence of malignancy in cryptorchid testes ranging from 49/100,000 (0.05 %) to 12/1075 (1 %). , The diagnosis is clinical, and a thorough pediatric genital physical examination is sufficient to detect cryptorchidism. Ultrasonography is not recommended, as this method does not reliably differentiate cryptorchidism from retractile testicles, wasting resources and potentially delaying surgical correction. , Surgery is considered more effective than hormones and is recommended for babies whose testicles did not descend until six months of age. , Depending on the location of the testicle, a specific surgical approach is indicated. In cases of abdominal testes, laparoscopy helps in diagnosis and therapy , . However, it is not certain whether the information regarding the best age to operate has reached the pediatricians, who are the first to diagnose an undescended testis and refer the patient to the surgeons. Aim This study aims to investigate the level of information pediatricians have about the subject. A cross-sectional observational study was designed to investigate the management of undescended testes by health professionals attending to children. A set of questions was prepared on the diagnosis and management of cryptorchidism. The final form was administered via "Google Forms" and contained 15 questions, each with only one correct alternative. The protocol was submitted and approved by the Local Ethics Committee (CAAE 47,886,321.6.0000.5404). This form was sent to pediatricians and pediatric residents, members of the Brazilian Society of Pediatrics (SBP). The invitation letter with the link to the form was sent to the participants via email by the SBP. According to the SBP mailing report, 18,577 emails were sent, of which only 29.1 % were opened.
A total of 762 participants answered the form; of these, 16 were duplicated responses, 13 were from non-pediatricians, and 5 participants did not accept the Informed Consent Form and therefore did not respond to the form, leaving 728 valid answers. Initially, the responses were stored in a Microsoft Excel spreadsheet, and the graphics provided by the Google Forms platform were recorded. Descriptive statistical analysis of the variables was performed with IBM SPSS version 22. The present study revealed that, regarding the profile of the participants, there was a predominance of participants who declared themselves to be pediatricians (87.4 %), with 10.2 % residents in pediatrics and 2.5 % residents in pediatric specialties. Regarding the years of training of the participants, there was a slight predominance of those with more than 30 years of training (26.5 %). Approximately half of the interviewees are not linked to a pediatrics teaching institution. Among those who declared having a link with an educational institution, 23.1 % are medical assistants, 16.6 % work as professors, and 10.6 % are residents of these institutions. Most of the participants came from the southeast of the country. Most participants stated that they work in both public and private networks. In the block of general questions on the topic, the frequencies described below in were recorded, and it was possible to observe that most participants selected the alternative that corresponded to the most consensual answers among societies. However, two main survey points were highlighted in the final analysis of the data. The first concerns the frequency of professionals requesting complementary exams to diagnose cryptorchidism, with 79 % of participants indicating the use of ultrasound to confirm the diagnosis. Another point that drew our attention was the ideal age for referral. The survey results indicated that only a little more than half of the professionals consulted are aware of the ideal age for referral . When analyzing the answers to the question about the ideal age for referral, it can be seen that respondents indicating six months as the ideal age for surgery were predominantly professionals with fewer than five or more than 30 years of practice and those linked to an educational institution. Nevertheless, when the data regarding the ideal age pediatricians consider suitable for surgery were analyzed, more than half of them chose alternatives outside the 6-to-12-month age range, as seen in . Data from the present survey indicate that nearly 40 % of pediatricians still believe that the ideal age for treating cryptorchidism may exceed 12 months of age and also that almost 80 % still rely on the use of ultrasound to confirm the diagnosis. Diagnosis of cryptorchidism is clinical and depends on adequate access to health services and the technical capacity of the examiner. However, 79.4 % of the research participants responded that they use ultrasound as diagnostic support. Only 20.1 % stated that there is no need for complementary exams because the physical exam is enough for the diagnosis.
The use of complementary exams, such as ultrasonography (US), is not recommended because this method does not reliably differentiate cryptorchidism from other diagnoses, does not influence the conduct, surgical approach, or evaluation of the viability of the testes involved, and does not rule out an intra-abdominal testicle; it is therefore a waste of resources that may lead to a delay in surgical correction. A retrospective study from Ottawa, Canada, concluded that the referral of patients with suspected undescended testis should not be accompanied by ultrasound, as it is unnecessary and misleading, in addition to consuming health resources. A prospective study by the University of Toronto revealed that ultrasound performed poorly as a diagnostic tool in detecting palpable undescended testes in boys, with a specificity of only 16 %. In this setting, radiological tests have a specificity of 44 %, usually lower than physical examination, which reaches 84 % specificity when performed by a pediatric urologist. Although magnetic resonance has greater sensitivity and specificity, it is an expensive test that is not widely available and requires sedation in pediatric patients. , Pediatricians' performance is essential for timely diagnosis and referral to surgery. Due to the adverse clinical outcomes, it is crucial that the diagnosis be made as early as possible and that, ideally, it takes place in the delivery room. Most of the participants consulted in the survey also considered that the pediatrician should examine the newborn's testicles for the first time in the delivery room itself, accounting for 93.5 % of responses in this item. In Brazil, Ordinance Number 31 of February 15, 1993, of the Ministry of Health directs a pediatrician or neonatologist's assessment of the newborn in the delivery room until the newborn is transferred to the care of the multidisciplinary team or rooming-in. Therefore, it is up to these professionals to complete a physical examination of the newborn. Pediatricians need adequate training to identify cryptorchidism and other congenital anomalies and offer appropriate treatment earlier. In addition, the position of non-palpable testicles at birth should be reassessed in the eighth week of life and at three months of life. Misdiagnosis and late referral seem to be a widespread problem. A University of Texas study concluded that most of the 121 patients referred to a pediatric urologist for cryptorchidism were referred after 12 months of life, and only half of the patients presented cryptorchidism. Orchiopexy is recommended between 6 and 12 months, or a maximum of 18 months, by most societies. , In the second edition of the Brazilian Treaty of Pediatrics, published in 2010, there was already a recommendation for orchidopexy at 12 months of life. This ideal age range was determined from the histological analysis of testicular tissue and the effects on fertility according to the time the correction was performed. Also, there is evidence of better results of the average tubular fertility index and the germ cell count in patients operated on before the first year of life. , The present survey shows that only 47.6 % of professionals indicated six to twelve months of life as the ideal age for surgery. These data underscore the lack of up-to-date information in almost half of the consulted pediatricians. Not surprisingly, pediatricians who graduated less than ten years ago gave more correct answers.
Regarding the main objective for performing orchidopexy, 88 % of professionals indicated the procedure as a means to reduce the incidence of testicular tumors and ensure the maintenance of sperm production. Although all options bring proven benefits from this surgery, the main objective of the procedure is to provide global testicular function, in addition to other benefits, such as the prevention of trauma. Professionals must consider these benefits to prioritize early diagnosis and provide the patient and his family with relevant information. The surgical approach is considered more effective than the use of hormones, since hormone therapies are based on studies of low-grade scientific evidence that do not account for the heterogeneity of patients, the location of the testicle, or the hormone dose, and that lack long-term follow-up. In addition, using hormones has short-term side effects such as scrotal erythema, pigmentation, induction of pubic hair, and penile growth, although these tend to regress with interruption of treatment. Therefore, although it has been used in special situations, such as bilateral cryptorchidism, hormone therapy is currently not recommended. , , In the present survey, 92.3 % of professionals did not recommend hormone therapy. This study has some limitations. The results herein expressed should be regarded with caution because the number of respondents (although a large number) represents less than 10 % of the total number of pediatricians in Brazil. Also, pediatricians who are interested in the subject might be over-represented in the study population, resulting in a selection bias. Despite this, due to the overall distribution of the responders, it is believed that the results reflect roughly the present state of knowledge among these professionals about cryptorchidism. Another important limitation is that the questions and the resulting answers are not applicable to acquired cryptorchidism, which is a different (although no less important) clinical entity that should also be recognized by every pediatrician. The results of this survey indicate that pediatricians' knowledge of the diagnosis and management of cryptorchidism is outdated and does not include the more current practices. These results show the importance of maintaining periodic update programs for pediatricians in general, involving educational institutions, medical societies, and health professionals. The authors declare no conflicts of interest.
Fast & furious: Rejecting the hypothesis that secondary psychopathy improves reaction time-based concealed information detection
800f5d76-8957-4c8c-a18f-98681da916a3
11478853
Forensic Medicine[mh]
Lying is an intrinsic feature of human behavior . We all lie and we have all been lied to . When people are asked to discriminate between truth and lie based on their perceptions, they correctly notice lies in about 47% of cases and classify truths as nondeceptive in about 61% of cases–which is close to chance level . Hence, it’s not surprising that throughout history humans have sought for techniques and methods that can distinguish between truth and lie . In ancient Israel, for instance, a woman accused of adultery was considered guilty if her belly swelled after drinking "bitter water" . In ancient China, those accused of fraud had to hold dry rice in their mouths–if the rice stayed dry, they were deemed guilty . These historical methods, though lacking scientific validation, hint at a connection between physiological changes and deception. Building upon this understanding, psychophysiological methods for lie detection, popularly known as "polygraphs", emerged in the early twentieth century . Such lie detection tools generally rely on physiological reactivity . Importantly, the difference between the various lie detection methods lies in the adopted paradigm and the way its questions are formulated [see ]. The classical, and probably most influential method, is the Control Question Test [CQT; ]. This test assumes that guilty examinees will show stronger physiological responses to relevant, e.g., crime-related, questions, whereas innocents will show stronger physiological responses to control questions . However, in real police investigations, both guilty (liars) and innocent (truth tellers) suspects may quickly identify the relevant questions and become emotionally aroused by them . As a result, both types of subjects (guilty and innocent) may show enhanced physiological responses to the relevant questions, making accurate classification difficult . Consequently, it is not surprising that the scientific community has criticized this method for being biased against the innocent in addition to lacking a theoretical basis . Indeed, many criminal investigations have been hindered by the unreliable results of the CQT. For instance, consider the infamous Green River Killer case, which began in 1982 with the discovery of five bodies in the Green River, Washington. In this case, Melvin Foster, a taxi driver, failed a CQT despite his innocence. It wasn’t until 2001 that DNA evidence implicated Gary Ridgway, who was ultimately convicted of 49 murders. Remarkably, Ridgway passed a CQT in 1984 . Lykken (1959) was one of the first to question the existence of specific deception reactions and, hence, he developed the Guilty Knowledge Test [GKT; ]. Today, the GKT is called the Concealed Information Test [CIT; ] and is considered a well-validated diagnostic test that aims to detect concealed knowledge . In this test, examinees are faced with several multiple-choice questions, each followed by one probe (e.g., crime-related) item and several irrelevant alternatives, which are similar to the probe . For instance, in the Green River Killer case, the body of the first victim, Wendy Lee Coffield, was pulled from the river with a pair of blue jeans knotted around her neck . An appropriate CIT question could have been: "What article of clothing was tied around the victim’s neck?" (a) black sweater; (b) purple shirt; (c) blue jeans; (d) red scarf; (e) green jacket. Importantly, knowledgeable suspects recognize the significant probes, leading to differential physiological and behavioral responses. 
Unknowledgeable suspects, on the other hand, cannot distinguish between probe and irrelevant items and respond uniformly to all items . Interestingly, while CIT researchers traditionally relied on autonomic physiological measures like heart rate, skin conductance, and brain responses, recent studies have incorporated behavioral measures such as reaction time . The RT-based CIT is designed according to the 3-stimulus protocol and includes, in addition to probe and irrelevant items, a third item type known as the “target stimulus” . These targets ensure stimulus-processing as they require a unique response . Specifically, participants are typically asked to judge the stimuli on familiarity and are instructed to press buttons with the captions "familiar" (for targets) versus "unfamiliar” [for probe and irrelevant items , ]. The task-required "unfamiliar" response to probe items is presumed to create a response conflict . Such response conflict may be resolved by inhibiting the automatic “familiar” response, which requires time . Hence, response conflict has been theorized to underlie the longer RTs for probe versus irrelevant items–i.e., the RT-CIT effect . Several studies provide direct support for the role of response conflict. Suchotzki et al . (2018), for instance, reasoned that since conflict arises when one denies familiarity with the known probe items, conflict should be stronger when one relies more heavily on familiarity. To explore this hypothesis, the authors manipulated familiarity-based responding by: (1) increasing the number of different targets (4 instead of 2 newly learned targets); and (2) using more familiar targets (2 personally relevant instead of 2 newly learned targets). Both manipulations increased the RT-CIT effect, supporting the response conflict account. Moreover, Suchotzki et al . (2015) instructed participants to admit knowledge of half the probes and deny knowledge of the remaining half. Their findings showed that overt deception, which generates response conflict, was essential for both the RT-CIT effect and the activation of the right inferior frontal gyrus, a brain region associated with inhibition . Interestingly, a recent study has provided support for the crucial role of conflict, however, also suggests that additional factors such as orientation to significant information contribute to the RT-CIT effect . Beyond theoretical considerations, meta-analytic research has demonstrated that the RT-CIT is a highly valid method for detecting concealed information. Nevertheless, it remains to be assessed how the RT-CIT is affected by different personality traits, such as the constellation of traits associated with psychopathy . This is especially relevant considering that psychopathic individuals constitute a significant proportion of the incarcerated population, with prevalence ranging from 20% to 30% . Notably, classical dual-factor models of psychopathy distinguish between primary and secondary variants . Secondary psychopathy, which is characterized by disinhibition and impulsivity, holds particular relevance in the context of the RT-CIT . Specifically, a diminished ability to inhibit responses and manage response conflict should lead to an elevated RT-CIT effect. Only a few studies have examined the influence of psychopathy on the CIT and found a significant CIT effect for psychopaths, which did not differ from that of non-psychopaths. However, these studies relied on physiological responses rather than RT . 
RT serves as a behavioral measure and is assumed to reflect a different cognitive mechanism. Specifically, while the autonomic CIT effects have been tied to either orienting or arousal inhibition [see – ], the RT-CIT effect has primarily been associated with response conflict . As outlined above, efficient conflict resolution requires adept inhibition capacities, which may be compromised by secondary psychopathic tendencies . Therefore, the objective of the present study was to examine whether the RT-based CIT is sensitive to secondary psychopathic traits in a student sample. To get a fuller comprehension of this relationship, we used a novel CIT protocol which features no-go trials to assess disinhibition (see Method). This study was approved by the Ethics Review Board of the Criminology department of Bar-Ilan University (BIU; January 26 th , 2023; see Ethics Review Board approval on https://osf.io/s5mrn/ ) and was performed in accordance with the relevant guidelines and regulations. The methods of this study, including sample size determination and exclusion criteria, were pre-registered on: https://osf.io/hz58u . Participants A total of 100 BIU students (79% female) were recruited through BIU’s online research portal (i.e., SONA). Participants’ average age was 23.88 years ( SD = 2.3, range = 20–37). All participants signed an informed consent form. At the end of the experiment, each participant received one credit point. All data of fourteen participants were excluded: thirteen participants were excluded because they made more than 50% errors to either target, probe or irrelevant items, and one participant was excluded because s/he did not complete the entire CIT (< 336 trials). Accordingly, the final sample included 86 participants (81.4% female, average age = 23.83, SD = 2.3, range = 20–37). As indicated in the pre-registration, we stopped data collection when we reached N = 100, since the Bayes Factor (BF) provided substantial evidence for the null hypothesis (i.e., BF 01 > 5; there is no linear association between the RT-CIT effect and secondary psychopathic traits). Materials The present study included (1) the Levenson’s Self-Report Psychopathy (LSRP) scale, which provided the psychopathy scores; (2) a Go/No-go RT-CIT, which provided the RT-CIT effect as well as a behavioral measure of response inhibition (i.e., the no-go error rate; as explained below); and (3) the Barratt Impulsiveness Scale (BIS-11), which provided the impulsivity scores. LSRP Psychopathic traits within our student sample were assessed using the LSRP . The LSRP contains a total of 26 items, rated on a four-point Likert scale from “disagree strongly” to “agree strongly", resulting in a total score range from 26 to 104. Developed specifically for non-forensic populations, the LSRP distinguishes between primary and secondary psychopathy, aligning with the original Psychopathy Checklist–Revised (PCL-R) factors . The primary psychopathy subscale (16-items; range: 16–64) evaluates interpersonal and affective features of psychopathy, while the secondary psychopathy subscale (10-items; range: 10–40) assesses impulsivity and antisocial lifestyle . The overall scale’s reliability typically falls within the range of 0.59 to 0.87; for the primary subscale Cronbach’s alpha ranges from 0.74 to 0.86, and for the secondary subscale, it ranges from 0.61 to 0.71 . In the current study, Cronbach’s alpha values were 0.79 for the overall LSRP, 0.8 for the primary subscale, and 0.63 for the secondary subscale. 
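For concreteness, a minimal sketch of LSRP subscale scoring and Cronbach's alpha as reported above; the item ordering (first 16 items primary, last 10 secondary) and the omission of reverse-keying are simplifying assumptions, and this is not the authors' scoring script.

```python
# Minimal sketch (not the authors' scoring script): LSRP total and subscale scores and
# Cronbach's alpha. Item ordering and the absence of reverse-keyed items are assumptions.
import numpy as np

PRIMARY = slice(0, 16)     # assumed: items 1-16 form the primary subscale (range 16-64)
SECONDARY = slice(16, 26)  # assumed: items 17-26 form the secondary subscale (range 10-40)

def lsrp_scores(responses):
    """responses: participants x 26 array of ratings on the 1-4 Likert scale."""
    responses = np.asarray(responses)
    return {
        "total": responses.sum(axis=1),                 # range 26-104
        "primary": responses[:, PRIMARY].sum(axis=1),
        "secondary": responses[:, SECONDARY].sum(axis=1),
    }

def cronbach_alpha(items):
    """items: participants x items array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = rng.integers(1, 5, size=(86, 26))             # simulated ratings for 86 participants
    print(lsrp_scores(demo)["secondary"][:5])
    print(round(cronbach_alpha(demo[:, SECONDARY]), 2))  # alpha of the secondary subscale
```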
This study used a Hebrew translated version of the LSRP . Go/No-go RT-CIT The Go/No-go task is widely used in psychology as a measure of inhibition and impulsivity . Therefore, the present experiment integrated this task within the RT-CIT–i.e., this study relied on a Go/No-go RT-CIT with both go and no-go trials. The regular CIT items–probes, irrelevants and targets–played the role of ’go’ items, to which participants had to respond by pressing a button. Specifically, a “unfamiliar” button for probes and irrelevants, but a “familiar” button for targets (as is common in the RT-CIT). When seeing the no-go items, participants were asked not to respond. Importantly, these no-go items were used to measure participants’ capacity for response inhibition, which is assumed to be compromised in secondary psychopathy . BIS-11 In addition to measuring response inhibition capacity with the novel no-go trials, we assessed impulsivity using the Barratt Impulsiveness Scale [BIS-11; ]. The BIS-11 is a self-report questionnaire which contains a total of 30 items that are rated on a four-point Likert scale ranging from “rarely/never” to “almost always" . Cronbach’s alpha for the BIS-11 typically falls within the range of 0.69 to 0.83 . In the current study, Cronbach’s alpha was 0.84. This study used a Hebrew translated version of the BIS-11 . Procedure The experiment was built in PsychoPy and performed online in ’Pavlovia’ (see script on https://osf.io/s5mrn/ ). Participants received a link to the experiment through SONA (i.e., BIU’s online research portal). Importantly, once participants finished the experiment, SONA prevented them from performing the experiment again. The experiment contained three main stages: (1) the LSRP questionnaire, (2) the RT-CIT, and (3) the BIS-11 and subjective ratings. Before starting the experiment, participants read and approved an informed consent form (by pressing a button). Stage 1 The LSRP questionnaire was completed after signing the informed consent form. All items (a total of 26) were presented one by one, and participants were asked to rate their agreement for each item, on a scale from 1 (“disagree strongly”) to 4 (“agree strongly"). Stage 2 Before starting the actual CIT, participants were presented with two item-lists, one of last names, and one of first names (female names for women and male names for men). Each list contained 16 items (i.e., names). Participants were asked to mark a maximum of 12 names, from each list, that have a special meaning for them. The irrelevant items (for the CIT) were chosen randomly from the words that were not marked. Then, participants were explained about the upcoming CIT and motivated to conceal their autobiographical items (i.e., the probe items). To increase motivation, participants read a short paragraph which states that the upcoming task is difficult, and that only highly intelligent people with a strong willpower can successfully conceal. In addition, to become familiar with the no-go items, Tiger and Zebra, participants read a short paragraph about these items (i.e., Two animals with spectacularly beautiful stripes patterns are the Tiger (part of the Felidae family) with black-orange stripes, and of course, the Zebra (part of the Equidae family) with black-and-white stripes). Similarly, to become familiar with the target items, Caesarea and Milan, participants read a short paragraph about these cities (i.e., Who has not heard about the city of Milan, which is located in northern Italy and known for its great wealth? 
And of course, there is no one who does not know the city of Caesarea that was established 2000 years ago by the Roman Empire!). Thus, the CIT items were divided into three semantic categories, names for probes and irrelevants, cities for targets, and animals for no-go items. In total, there were 14 distinct items: 2 probes (participants first name and participants last name), 8 irrelevants (4 other first names and 4 other last names), 2 targets (Caesarea and Milan), and 2 no-go items (Tiger and Zebra). The RT-CIT was operated according to the multiple-probes-protocol (MPP), which means that all 14 items were intermixed in each block of the CIT (there were 4 blocks in total). Per block, each item was presented 6 times, and thus, each block contained 84 items (14 x 6 = 84). The entire experiment contained 336 items (84 items x 4 blocks = 336). The order of items’ presentation was determined randomly, with the following restriction: two consecutive presentations of the same item were not allowed. All stimuli were displayed in a serial manner, in the middle of the screen, for 1500ms. Between each two items, a symbol of a plus was presented; this inter stimulus interval (ISI) was either 250ms, 500ms, or 750ms [similar to , , , ]. On top of the items, participants also saw the question: "Is this word familiar to you"? Participants were requested to respond using one of two buttons: unfamiliar (i.e., “I”) for probes and irrelevant items, familiar (i.e., “E”) for targets . In addition, when seeing a no-go item, participants were requested not to respond. During ’go’ trials only, two feedback messages in the form of red words could briefly appear above the item for 200ms: (1) "WRONG" if participants pressed the wrong button, and (2) "TOO SLOW", if 800ms passed since the item appeared and no button was pressed [similar to , , , , , – ]. For a visual presentation, please see . Importantly, the actual RT-CIT was also preceded by three successive practice phases that familiarized participants with the test procedure. These practice phases were repeated until certain criteria were met (as detailed below). In the first practice phase, which included solely “go” trials, items remained on the screen until one of the two available buttons (“E” or “I”) was pressed. If participants pressed the wrong button, they received "WRONG" feedback. In the second practice phase, which included both “go” and “no-go” trials, items remained on the screen until a button was pressed or until 1500ms had elapsed. Similar to the first practice phase, participants received "WRONG" feedback in case of an incorrect response. In the last practice phase, participants also received "TOO SLOW" feedback if they failed to press any button within 800ms during “go” trials. Please note that participants were able to advance through each phase if they met the following three criteria: (1) a maximum of 50% errors (i.e., incorrect button presses), (2) a maximum of 20% of RTs falling under 150ms, and (3) a mean reaction time that did not exceed 800ms. If participants did not meet these criteria, they received feedback about their performance (i.e., "Sorry, you failed this practice phase. Please repeat the training") and had to perform the practice phase again (up to a maximum of two attempts). Stage 3 After the CIT, participants were asked to complete the BIS-11 questionnaire. 
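Before turning to the post-CIT questionnaires, a minimal sketch of the Stage 2 trial randomization just described (14 items, 6 repetitions per block, four blocks, no immediate repetition of the same item); this is plain Python rather than the original PsychoPy/Pavlovia script, which is available on the OSF.

```python
# Minimal sketch (plain Python, not the original PsychoPy/Pavlovia script): the Stage 2
# trial sequence, 14 items x 6 repetitions per block, four blocks, randomized so that
# the same item never appears twice in a row (simple rejection sampling).
import random

def build_block(items, reps=6, rng=None):
    rng = rng or random.Random()
    while True:
        trials = items * reps
        rng.shuffle(trials)
        if all(a != b for a, b in zip(trials, trials[1:])):  # no immediate repetitions
            return trials

if __name__ == "__main__":
    items = (["probe_first_name", "probe_last_name"]
             + ["irrelevant_%d" % i for i in range(1, 9)]
             + ["target_Caesarea", "target_Milan"]
             + ["nogo_Tiger", "nogo_Zebra"])                  # 14 distinct items
    rng = random.Random(2023)
    blocks = [build_block(items, rng=rng) for _ in range(4)]
    print(len(blocks[0]), "trials per block,", sum(len(b) for b in blocks), "in total")  # 84, 336
```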
All items (a total of 30) were presented one by one, and participants were asked to rate their agreement for each item, on a scale from 1 (“rarely/never”) to 4 (“almost always"). Finally, after the BIS-11, participants were asked to complete four parts to summarize their experience in the experiment. First, they were asked to rate the significance level of the 2 probes, 2 targets, 8 irrelevants, and 2 no-go items on a scale from 1 ("not significant at all") to 9 ("extremely significant"). These ratings were obtained to examine (and ensure) that the selected probes were more significant than the irrelevant items. Second, participants were asked to rate how motivated they were to succeed in the test, on a scale from 1 ("not motivated at all") to 10 ("very motivated"). Third, participants were asked to rate how impulsive they think they were during the CIT, on a scale from 1 ("not impulsive at all") to 10 ("very impulsive"). Fourth, participants were presented with a list of countermeasures, and were asked to mark the options they used. If they didn’t use any countermeasures, they could mark the option "No countermeasures were used". At the end of the experiment, participants were thanked for their participation and granted their credit points. Outliers and exclusions Single items were excluded according to the following criteria: (1) Each button press under 150ms; (2) Each button press above 800ms; (3) Each error of pressing the wrong button. Moreover, the data of an entire participant were excluded when: (1) The participant made at least 50% errors (in go trials of the CIT) to any of the 3 stimulus types (probe, irrelevant, target); (2) The participant did not complete the entire CIT (< 336 trials). Accordingly, all data of fourteen participants were excluded (see Participants). A total of 100 BIU students (79% female) were recruited through BIU’s online research portal (i.e., SONA). Participants’ average age was 23.88 years ( SD = 2.3, range = 20–37). All participants signed an informed consent form. At the end of the experiment, each participant received one credit point. All data of fourteen participants were excluded: thirteen participants were excluded because they made more than 50% errors to either target, probe or irrelevant items, and one participant was excluded because s/he did not complete the entire CIT (< 336 trials). Accordingly, the final sample included 86 participants (81.4% female, average age = 23.83, SD = 2.3, range = 20–37). As indicated in the pre-registration, we stopped data collection when we reached N = 100, since the Bayes Factor (BF) provided substantial evidence for the null hypothesis (i.e., BF 01 > 5; there is no linear association between the RT-CIT effect and secondary psychopathic traits). The present study included (1) the Levenson’s Self-Report Psychopathy (LSRP) scale, which provided the psychopathy scores; (2) a Go/No-go RT-CIT, which provided the RT-CIT effect as well as a behavioral measure of response inhibition (i.e., the no-go error rate; as explained below); and (3) the Barratt Impulsiveness Scale (BIS-11), which provided the impulsivity scores. LSRP Psychopathic traits within our student sample were assessed using the LSRP . The LSRP contains a total of 26 items, rated on a four-point Likert scale from “disagree strongly” to “agree strongly", resulting in a total score range from 26 to 104. 
Developed specifically for non-forensic populations, the LSRP distinguishes between primary and secondary psychopathy, aligning with the original Psychopathy Checklist–Revised (PCL-R) factors . The primary psychopathy subscale (16-items; range: 16–64) evaluates interpersonal and affective features of psychopathy, while the secondary psychopathy subscale (10-items; range: 10–40) assesses impulsivity and antisocial lifestyle . The overall scale’s reliability typically falls within the range of 0.59 to 0.87; for the primary subscale Cronbach’s alpha ranges from 0.74 to 0.86, and for the secondary subscale, it ranges from 0.61 to 0.71 . In the current study, Cronbach’s alpha values were 0.79 for the overall LSRP, 0.8 for the primary subscale, and 0.63 for the secondary subscale. This study used a Hebrew translated version of the LSRP . Go/No-go RT-CIT The Go/No-go task is widely used in psychology as a measure of inhibition and impulsivity . Therefore, the present experiment integrated this task within the RT-CIT–i.e., this study relied on a Go/No-go RT-CIT with both go and no-go trials. The regular CIT items–probes, irrelevants and targets–played the role of ’go’ items, to which participants had to respond by pressing a button. Specifically, a “unfamiliar” button for probes and irrelevants, but a “familiar” button for targets (as is common in the RT-CIT). When seeing the no-go items, participants were asked not to respond. Importantly, these no-go items were used to measure participants’ capacity for response inhibition, which is assumed to be compromised in secondary psychopathy . BIS-11 In addition to measuring response inhibition capacity with the novel no-go trials, we assessed impulsivity using the Barratt Impulsiveness Scale [BIS-11; ]. The BIS-11 is a self-report questionnaire which contains a total of 30 items that are rated on a four-point Likert scale ranging from “rarely/never” to “almost always" . Cronbach’s alpha for the BIS-11 typically falls within the range of 0.69 to 0.83 . In the current study, Cronbach’s alpha was 0.84. This study used a Hebrew translated version of the BIS-11 . Psychopathic traits within our student sample were assessed using the LSRP . The LSRP contains a total of 26 items, rated on a four-point Likert scale from “disagree strongly” to “agree strongly", resulting in a total score range from 26 to 104. Developed specifically for non-forensic populations, the LSRP distinguishes between primary and secondary psychopathy, aligning with the original Psychopathy Checklist–Revised (PCL-R) factors . The primary psychopathy subscale (16-items; range: 16–64) evaluates interpersonal and affective features of psychopathy, while the secondary psychopathy subscale (10-items; range: 10–40) assesses impulsivity and antisocial lifestyle . The overall scale’s reliability typically falls within the range of 0.59 to 0.87; for the primary subscale Cronbach’s alpha ranges from 0.74 to 0.86, and for the secondary subscale, it ranges from 0.61 to 0.71 . In the current study, Cronbach’s alpha values were 0.79 for the overall LSRP, 0.8 for the primary subscale, and 0.63 for the secondary subscale. This study used a Hebrew translated version of the LSRP . The Go/No-go task is widely used in psychology as a measure of inhibition and impulsivity . Therefore, the present experiment integrated this task within the RT-CIT–i.e., this study relied on a Go/No-go RT-CIT with both go and no-go trials. 
The regular CIT items–probes, irrelevants and targets–played the role of ’go’ items, to which participants had to respond by pressing a button. Specifically, a “unfamiliar” button for probes and irrelevants, but a “familiar” button for targets (as is common in the RT-CIT). When seeing the no-go items, participants were asked not to respond. Importantly, these no-go items were used to measure participants’ capacity for response inhibition, which is assumed to be compromised in secondary psychopathy . In addition to measuring response inhibition capacity with the novel no-go trials, we assessed impulsivity using the Barratt Impulsiveness Scale [BIS-11; ]. The BIS-11 is a self-report questionnaire which contains a total of 30 items that are rated on a four-point Likert scale ranging from “rarely/never” to “almost always" . Cronbach’s alpha for the BIS-11 typically falls within the range of 0.69 to 0.83 . In the current study, Cronbach’s alpha was 0.84. This study used a Hebrew translated version of the BIS-11 . The experiment was built in PsychoPy and performed online in ’Pavlovia’ (see script on https://osf.io/s5mrn/ ). Participants received a link to the experiment through SONA (i.e., BIU’s online research portal). Importantly, once participants finished the experiment, SONA prevented them from performing the experiment again. The experiment contained three main stages: (1) the LSRP questionnaire, (2) the RT-CIT, and (3) the BIS-11 and subjective ratings. Before starting the experiment, participants read and approved an informed consent form (by pressing a button). Stage 1 The LSRP questionnaire was completed after signing the informed consent form. All items (a total of 26) were presented one by one, and participants were asked to rate their agreement for each item, on a scale from 1 (“disagree strongly”) to 4 (“agree strongly"). Stage 2 Before starting the actual CIT, participants were presented with two item-lists, one of last names, and one of first names (female names for women and male names for men). Each list contained 16 items (i.e., names). Participants were asked to mark a maximum of 12 names, from each list, that have a special meaning for them. The irrelevant items (for the CIT) were chosen randomly from the words that were not marked. Then, participants were explained about the upcoming CIT and motivated to conceal their autobiographical items (i.e., the probe items). To increase motivation, participants read a short paragraph which states that the upcoming task is difficult, and that only highly intelligent people with a strong willpower can successfully conceal. In addition, to become familiar with the no-go items, Tiger and Zebra, participants read a short paragraph about these items (i.e., Two animals with spectacularly beautiful stripes patterns are the Tiger (part of the Felidae family) with black-orange stripes, and of course, the Zebra (part of the Equidae family) with black-and-white stripes). Similarly, to become familiar with the target items, Caesarea and Milan, participants read a short paragraph about these cities (i.e., Who has not heard about the city of Milan, which is located in northern Italy and known for its great wealth? And of course, there is no one who does not know the city of Caesarea that was established 2000 years ago by the Roman Empire!). Thus, the CIT items were divided into three semantic categories, names for probes and irrelevants, cities for targets, and animals for no-go items. 
In total, there were 14 distinct items: 2 probes (participants first name and participants last name), 8 irrelevants (4 other first names and 4 other last names), 2 targets (Caesarea and Milan), and 2 no-go items (Tiger and Zebra). The RT-CIT was operated according to the multiple-probes-protocol (MPP), which means that all 14 items were intermixed in each block of the CIT (there were 4 blocks in total). Per block, each item was presented 6 times, and thus, each block contained 84 items (14 x 6 = 84). The entire experiment contained 336 items (84 items x 4 blocks = 336). The order of items’ presentation was determined randomly, with the following restriction: two consecutive presentations of the same item were not allowed. All stimuli were displayed in a serial manner, in the middle of the screen, for 1500ms. Between each two items, a symbol of a plus was presented; this inter stimulus interval (ISI) was either 250ms, 500ms, or 750ms [similar to , , , ]. On top of the items, participants also saw the question: "Is this word familiar to you"? Participants were requested to respond using one of two buttons: unfamiliar (i.e., “I”) for probes and irrelevant items, familiar (i.e., “E”) for targets . In addition, when seeing a no-go item, participants were requested not to respond. During ’go’ trials only, two feedback messages in the form of red words could briefly appear above the item for 200ms: (1) "WRONG" if participants pressed the wrong button, and (2) "TOO SLOW", if 800ms passed since the item appeared and no button was pressed [similar to , , , , , – ]. For a visual presentation, please see . Importantly, the actual RT-CIT was also preceded by three successive practice phases that familiarized participants with the test procedure. These practice phases were repeated until certain criteria were met (as detailed below). In the first practice phase, which included solely “go” trials, items remained on the screen until one of the two available buttons (“E” or “I”) was pressed. If participants pressed the wrong button, they received "WRONG" feedback. In the second practice phase, which included both “go” and “no-go” trials, items remained on the screen until a button was pressed or until 1500ms had elapsed. Similar to the first practice phase, participants received "WRONG" feedback in case of an incorrect response. In the last practice phase, participants also received "TOO SLOW" feedback if they failed to press any button within 800ms during “go” trials. Please note that participants were able to advance through each phase if they met the following three criteria: (1) a maximum of 50% errors (i.e., incorrect button presses), (2) a maximum of 20% of RTs falling under 150ms, and (3) a mean reaction time that did not exceed 800ms. If participants did not meet these criteria, they received feedback about their performance (i.e., "Sorry, you failed this practice phase. Please repeat the training") and had to perform the practice phase again (up to a maximum of two attempts). Stage 3 After the CIT, participants were asked to complete the BIS-11 questionnaire. All items (a total of 30) were presented one by one, and participants were asked to rate their agreement for each item, on a scale from 1 (“rarely/never”) to 4 (“almost always"). Finally, after the BIS-11, participants were asked to complete four parts to summarize their experience in the experiment. 
First, they were asked to rate the significance level of the 2 probes, 2 targets, 8 irrelevants, and 2 no-go items on a scale from 1 ("not significant at all") to 9 ("extremely significant"). These ratings were obtained to examine (and ensure) that the selected probes were more significant than the irrelevant items. Second, participants were asked to rate how motivated they were to succeed in the test, on a scale from 1 ("not motivated at all") to 10 ("very motivated"). Third, participants were asked to rate how impulsive they think they were during the CIT, on a scale from 1 ("not impulsive at all") to 10 ("very impulsive"). Fourth, participants were presented with a list of countermeasures, and were asked to mark the options they used. If they didn’t use any countermeasures, they could mark the option "No countermeasures were used". At the end of the experiment, participants were thanked for their participation and granted their credit points. Outliers and exclusions Single items were excluded according to the following criteria: (1) Each button press under 150ms; (2) Each button press above 800ms; (3) Each error of pressing the wrong button. Moreover, the data of an entire participant were excluded when: (1) The participant made at least 50% errors (in go trials of the CIT) to any of the 3 stimulus types (probe, irrelevant, target); (2) The participant did not complete the entire CIT (< 336 trials). Accordingly, all data of fourteen participants were excluded (see Participants). The LSRP questionnaire was completed after signing the informed consent form. All items (a total of 26) were presented one by one, and participants were asked to rate their agreement for each item, on a scale from 1 (“disagree strongly”) to 4 (“agree strongly"). Before starting the actual CIT, participants were presented with two item-lists, one of last names, and one of first names (female names for women and male names for men). Each list contained 16 items (i.e., names). Participants were asked to mark a maximum of 12 names, from each list, that have a special meaning for them. The irrelevant items (for the CIT) were chosen randomly from the words that were not marked. Then, participants were explained about the upcoming CIT and motivated to conceal their autobiographical items (i.e., the probe items). To increase motivation, participants read a short paragraph which states that the upcoming task is difficult, and that only highly intelligent people with a strong willpower can successfully conceal. In addition, to become familiar with the no-go items, Tiger and Zebra, participants read a short paragraph about these items (i.e., Two animals with spectacularly beautiful stripes patterns are the Tiger (part of the Felidae family) with black-orange stripes, and of course, the Zebra (part of the Equidae family) with black-and-white stripes). Similarly, to become familiar with the target items, Caesarea and Milan, participants read a short paragraph about these cities (i.e., Who has not heard about the city of Milan, which is located in northern Italy and known for its great wealth? And of course, there is no one who does not know the city of Caesarea that was established 2000 years ago by the Roman Empire!). Thus, the CIT items were divided into three semantic categories, names for probes and irrelevants, cities for targets, and animals for no-go items. 
In total, there were 14 distinct items: 2 probes (participants first name and participants last name), 8 irrelevants (4 other first names and 4 other last names), 2 targets (Caesarea and Milan), and 2 no-go items (Tiger and Zebra). The RT-CIT was operated according to the multiple-probes-protocol (MPP), which means that all 14 items were intermixed in each block of the CIT (there were 4 blocks in total). Per block, each item was presented 6 times, and thus, each block contained 84 items (14 x 6 = 84). The entire experiment contained 336 items (84 items x 4 blocks = 336). The order of items’ presentation was determined randomly, with the following restriction: two consecutive presentations of the same item were not allowed. All stimuli were displayed in a serial manner, in the middle of the screen, for 1500ms. Between each two items, a symbol of a plus was presented; this inter stimulus interval (ISI) was either 250ms, 500ms, or 750ms [similar to , , , ]. On top of the items, participants also saw the question: "Is this word familiar to you"? Participants were requested to respond using one of two buttons: unfamiliar (i.e., “I”) for probes and irrelevant items, familiar (i.e., “E”) for targets . In addition, when seeing a no-go item, participants were requested not to respond. During ’go’ trials only, two feedback messages in the form of red words could briefly appear above the item for 200ms: (1) "WRONG" if participants pressed the wrong button, and (2) "TOO SLOW", if 800ms passed since the item appeared and no button was pressed [similar to , , , , , – ]. For a visual presentation, please see . Importantly, the actual RT-CIT was also preceded by three successive practice phases that familiarized participants with the test procedure. These practice phases were repeated until certain criteria were met (as detailed below). In the first practice phase, which included solely “go” trials, items remained on the screen until one of the two available buttons (“E” or “I”) was pressed. If participants pressed the wrong button, they received "WRONG" feedback. In the second practice phase, which included both “go” and “no-go” trials, items remained on the screen until a button was pressed or until 1500ms had elapsed. Similar to the first practice phase, participants received "WRONG" feedback in case of an incorrect response. In the last practice phase, participants also received "TOO SLOW" feedback if they failed to press any button within 800ms during “go” trials. Please note that participants were able to advance through each phase if they met the following three criteria: (1) a maximum of 50% errors (i.e., incorrect button presses), (2) a maximum of 20% of RTs falling under 150ms, and (3) a mean reaction time that did not exceed 800ms. If participants did not meet these criteria, they received feedback about their performance (i.e., "Sorry, you failed this practice phase. Please repeat the training") and had to perform the practice phase again (up to a maximum of two attempts). After the CIT, participants were asked to complete the BIS-11 questionnaire. All items (a total of 30) were presented one by one, and participants were asked to rate their agreement for each item, on a scale from 1 (“rarely/never”) to 4 (“almost always"). Finally, after the BIS-11, participants were asked to complete four parts to summarize their experience in the experiment. 
First, they were asked to rate the significance level of the 2 probes, 2 targets, 8 irrelevants, and 2 no-go items on a scale from 1 ("not significant at all") to 9 ("extremely significant"). These ratings were obtained to examine (and ensure) that the selected probes were more significant than the irrelevant items. Second, participants were asked to rate how motivated they were to succeed in the test, on a scale from 1 ("not motivated at all") to 10 ("very motivated"). Third, participants were asked to rate how impulsive they think they were during the CIT, on a scale from 1 ("not impulsive at all") to 10 ("very impulsive"). Fourth, participants were presented with a list of countermeasures, and were asked to mark the options they used. If they didn’t use any countermeasures, they could mark the option "No countermeasures were used". At the end of the experiment, participants were thanked for their participation and granted their credit points. Single items were excluded according to the following criteria: (1) Each button press under 150ms; (2) Each button press above 800ms; (3) Each error of pressing the wrong button. Moreover, the data of an entire participant were excluded when: (1) The participant made at least 50% errors (in go trials of the CIT) to any of the 3 stimulus types (probe, irrelevant, target); (2) The participant did not complete the entire CIT (< 336 trials). Accordingly, all data of fourteen participants were excluded (see Participants). All data were pre-processed using Matlab R2022b (The MathWorks, Natick, MA). Thereafter, data analyses were performed using JASP statistical program [ , version 0.17.2.]. The analysis plan was pre-registered on: https://osf.io/hz58u , and the data along with analysis scripts can be accessed at: https://osf.io/s5mrn/ . Subjective ratings Prior to testing the main hypothesis (i.e., correlation between the RT-CIT effect and secondary psychopathic traits), we analyzed the subjective ratings which were obtained after the CIT (these analyses were not pre-registered). First, we analyzed (1) participants self-reported motivation to conceal their identity during the CIT, and (2) participants self-reported impulsivity during the CIT (in both cases, scale ranged from 1–10). Both the motivation to conceal ( M = 7.71, SD = 2.2) and experienced impulsivity ( M = 6.22, SD = 2.16) were high. Second, we analyzed the self-reported significance of probe and irrelevant items (scale ranged from 1–9). As expected, the significance of probes ( M = 8.58, SD = 1.21) was higher than the significance of irrelevants ( M = 1.94, SD = 1.1); t (85) = 36.01, p < .001, d = 3.88, BF 10 = 9.440 × 10 +49 . Third, we analyzed the reported countermeasures: 9% of participants reported that they tried to distract themselves; 14% reported that they tried to answer faster to the probe items (i.e., their own name); 1% reported that they tried to answer more slowly to probes; 2% reported that they tried to answer without looking at the screen; and 70% reported that they did not use any countermeasures. Main analyses For the main analysis, we computed for each participant the RT-CIT effect, which is defined as the mean RT of probes minus the mean RT of irrelevants. As we relied on a modified RT-CIT with ‘no-go’ trials, we first compared the mean RT-CIT effect across participants (i.e., 55 ms; see also ) to 0. 
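To make the computation just defined concrete, here is a minimal pandas sketch that applies the trial-level exclusion criteria reported in the Outliers and exclusions section and then derives one RT-CIT effect per participant. The column names ("subject", "item_type", "rt_ms", "correct") are assumptions made for this illustration; the actual pre-processing was done in Matlab and the analyses in JASP.

```python
import pandas as pd

def rt_cit_effect(trials: pd.DataFrame) -> pd.Series:
    """Per-participant RT-CIT effect after trial-level exclusions.

    Expected columns (assumed for illustration):
      subject, item_type ('probe'|'irrelevant'|'target'), rt_ms, correct (bool)
    Exclusions mirror the criteria described earlier: RTs below 150 ms,
    RTs above 800 ms, and incorrect button presses.
    """
    valid = trials[
        (trials["rt_ms"] >= 150)
        & (trials["rt_ms"] <= 800)
        & (trials["correct"])
    ]
    mean_rt = valid.pivot_table(index="subject", columns="item_type",
                                values="rt_ms", aggfunc="mean")
    return mean_rt["probe"] - mean_rt["irrelevant"]

# Hypothetical usage with a toy data frame:
toy = pd.DataFrame({
    "subject":   [1, 1, 1, 1, 2, 2, 2, 2],
    "item_type": ["probe", "irrelevant", "probe", "irrelevant"] * 2,
    "rt_ms":     [620, 540, 900, 560, 580, 530, 610, 120],
    "correct":   [True] * 8,
})
print(rt_cit_effect(toy))  # one RT-CIT effect (in ms) per subject
```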
A statistically significant difference was observed, t (85) = 15.7, p < .001, d = 1.69 (95% CI = [1.36, 2.02]), which was very strongly supported by the BF 10 = 9.527×10 +23 . To test the main hypothesis, we correlated the individual RT-CIT effects with the secondary LSRP scores. Contrary to the research hypothesis, no significant correlation was observed: r = 0.04, p = 0.725, BF₀₁ = 6.98 (see ). This result suggests that there is no linear association between the RT-CIT effect and secondary psychopathy (the null hypothesis is ~7 times more likely than the alternative hypothesis). Please note that similar results are obtained when including the data of the fourteen excluded participants: r = 0.03, p = 0.74, BF₀₁ = 7.5. Moreover, as can be seen in , support for the null hypothesis increased as data accumulated. To further examine the relationship between the RT-CIT effect, psychopathy, inhibition and impulsivity, we also correlated the RT-CIT effect with the total LSRP score, primary LSRP score, No-go error rate, and the BIS-11 score. Consistent with the main results reported above, which support the null hypothesis, no significant correlations were found with the RT-CIT effect (see Tables and ). Notably, in a non-preregistered exploratory analysis, we performed a Bayesian Analysis of Covariance with Primary LSRP, Secondary LSRP, BIS-11, No-go errors, and Gender as predictors, and the RT-CIT effect as dependent variable. Using the BF Inclusion metric, we compared all models including a particular predictor to those without the predictor [see ]. The Inclusion BF for Secondary LSRP was 0.134 (note that similar values were observed for other predictors). This analysis further supports our main conclusion: there is no discernible linear relationship between secondary psychopathic traits and the RT-CIT effect (full results are available on the OSF at https://osf.io/s5mrn/ ). ROC analysis As we relied on a novel CIT protocol, the area under the ROC curve (AUC) was calculated to measure the detection efficiency of classifying participants as unknowledgeable (naïve) or knowledgeable based on their individual probe-irrelevant score (i.e., the dCIT). The dCIT is computed by subtracting the mean RT of irrelevants from the mean RT of probes and dividing this difference by the standard deviation of irrelevant RTs . As there were no naïve participants in the current experiment, their data were simulated. This simulation procedure is based on the assumption that naïve participants, in contrast to knowledgeable ones, cannot distinguish between probe and irrelevant items and therefore there is no reason to expect that probes would elicit systematic differential RTs. Thus, for naïve participants, the expected mean value of dCIT is 0. The standard deviation of dCIT was estimated using the following formula: $\sqrt{\frac{N-1}{N-3}\cdot\frac{4}{N}\left(1+\frac{\delta^{2}}{8}\right)}$, where N is the total sample size and δ is the true effect size in the population, which is 0 in this case [e.g., ]. Further, it was assumed that the data of individual naïve participants are distributed normally. Hence, a simulated dataset was created by taking n random samples from the normal distribution (with a mean of 0 and a SD computed as explained above). This simulation procedure, as well as the computation of the AUC, were repeated 1000 times using a bootstrapping procedure. These 1000 bootstrapped AUCs were then used to compute the mean AUC and its 95% confidence interval (CI).
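A schematic re-implementation of this simulation and bootstrap procedure is sketched below, assuming normally distributed dCIT scores for unknowledgeable examinees with mean 0 and the standard deviation given by the formula above; it is not the authors' analysis script, and the example dCIT values are invented.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def naive_sd(n: int, delta: float = 0.0) -> float:
    """SD of simulated dCIT scores for unknowledgeable participants
    (formula given above; delta is the assumed population effect size)."""
    return np.sqrt((n - 1) / (n - 3) * (4 / n) * (1 + delta ** 2 / 8))

def bootstrap_auc(dcit_knowledgeable, n_boot=1000, seed=0):
    """Repeatedly simulate naive dCIT scores and compute the AUC against
    the observed (knowledgeable) dCIT scores."""
    rng = np.random.default_rng(seed)
    observed = np.asarray(dcit_knowledgeable)
    n = len(observed)
    sd = naive_sd(n)
    aucs = []
    for _ in range(n_boot):
        naive = rng.normal(loc=0.0, scale=sd, size=n)        # simulated innocents
        scores = np.concatenate([naive, observed])
        labels = np.concatenate([np.zeros(n), np.ones(n)])    # 1 = knowledgeable
        aucs.append(roc_auc_score(labels, scores))
    aucs = np.array(aucs)
    return aucs.mean(), np.percentile(aucs, [2.5, 97.5])

# Hypothetical usage with made-up dCIT scores (not the real data):
rng = np.random.default_rng(1)
observed_dcit = rng.normal(loc=1.7, scale=1.0, size=86)
print(bootstrap_auc(observed_dcit))
```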
Accordingly, the mean AUC of our novel Go/No-go CIT was 0.92 (95% CI = [0.91, 0.94]), exceeding the average area (0.82) reported in the review paper by Meijer et al . (2016). In sum, the novel CIT paradigm demonstrated impressive detection efficiency. However, contrary to our expectations, we observed no significant correlation between the RT-CIT effect and secondary psychopathic traits (BF 01 = 6.98). This finding is further corroborated by the absence of significant correlations between the RT-CIT effect and both impulsivity (as measured by the BIS-11; BF₀₁ = 3.14) and response inhibition capacity (assessed by the no-go error rate; BF₀₁ = 3.08).
The present study examined the relation between the RT-CIT effect and secondary psychopathy in a student sample. The RT-CIT effect has been suggested to be largely driven by response conflict . Specifically, the need to classify familiar probes as "unfamiliar" induces a conflict. This conflict may be resolved by inhibiting the automatic "familiar" response, a process that consumes time and consequently slows down RT.
Hence, it was hypothesized that individuals with higher secondary psychopathic traits, marked by impulsivity and impaired inhibition capacity, would produce larger RT-CIT effects compared to individuals with lower levels of secondary psychopathic traits. Secondary psychopathic traits were measured using the LSRP questionnaire and correlated with the RT-CIT effect. Notably, both the mean score and reliability of the different LSRP scales were consistent with other reports in the literature . Moreover, the mean RT-CIT effect was large and significantly different from 0 (Cohen’s d = 1.69; BF 10 = 9.527×10 +23 ). However, contrary to our hypothesis, no significant correlation between secondary psychopathy and the CIT effect was observed, as supported by the Bayesian analysis that revealed substantial evidence for the null hypothesis (BF 01 = 6.98). These findings are in line with those of Verschuere and in ´t Hout (2016), who examined the cognitive cost of lying among psychopaths using a Sheffield lie test (which measures deception, not concealed information). Similar to the present study, no significant correlation was found between psychopathy and the RT effect (RT LIE minus RT TRUTH ). Moreover, the current findings are in accordance with findings of CIT studies that used physiological measures and revealed no effect of psychopathy on the CIT . To delve deeper into our primary research question, we included two additional measures: impulsivity and response inhibition capacity. Impulsivity was assessed using the BIS-11 questionnaire, and although we found a significant correlation between impulsivity and secondary psychopathy, no significant correlation was observed between impulsivity and the RT-CIT effect [consistent with ]. It is noteworthy that self-reports and behavioral measures (like the RT-CIT) typically yield weak correlations [ – ). Hence, to measure response inhibition capacity, we integrated a Go/No-go task within the CIT. However, consistent with our main findings, response inhibition capacity (as indicated by the no-go error rate) did not correlate with secondary psychopathy or the RT-CIT effect (please see ). Thus, the present study suggests that secondary psychopathy does not influence the RT-CIT effect. This conclusion should, however, be approached with caution for two primary reasons. Firstly, while our hypothesis was built on the premise that secondary psychopathy is marked by impulsivity and impaired response inhibition capacity, our measures of inhibition and secondary psychopathy did not correlate. This may be due to our non-forensic student sample. While studies utilizing non-forensic samples have generally shown no correlation between psychopathy and response inhibition capacity, studies involving forensic samples have demonstrated such a correlation [e.g., vs. ]. Secondly, our inhibition and CIT effect measures did not correlate. Although the integration of the Go/No-go task within the RT-CIT is unique to our study, few previous CIT studies have used “secondary response inhibition tasks”. For example, Ambach et al . (2008) included the Go/No-go task alongside the CIT (with different stimuli for each task, unlike the present study) and Suchotzki et al . (2019) introduced a Stroop task after the CIT. Both studies showed similar results to the present one–no significant correlation between response inhibition capacity and the RT-CIT effect. Ultimately, this raises the question of whether response conflict is the only mechanism underlying the RT-CIT. 
Accordingly, as indicated previously, a recent study of klein Selle et al . (2023) has provided support for the idea that additional factors may contribute to the RT-CIT effect. These authors compared a conflict condition (where the response buttons emphasized familiarity) with a no conflict condition (where the response buttons emphasized categorical membership). Although conflict strengthened the RT-CIT effect, the effect was significant even in the no conflict condition. Therefore, it was suggested that conflict theory alone is not a sufficient account of the RT-CIT effect and that other mechanisms such as orientation may play a role. The orienting response entails reflexive behavioral and physiological responses to changes in the environment . This response is primarily modulated by two key factors: the novelty of the stimulus and its perceived significance . In the context of the CIT, probe items are both significant and novel (i.e., presented less frequently) for knowledgeable individuals. Hence, these items should elicit an enhanced orienting response. Such enhanced responses to significant probe items [see – ] may briefly interrupt ongoing behavior and consequently lengthen RTs. This notion is supported by a limited number of CIT studies. For instance, Lukács et al . (2019) categorized stimuli into three salience levels [forename, birthday, and favorite animal, from highest to lowest; ] and found a larger RT-CIT effect for more significant items . Suchotzki et al . (2015) manipulated the proportion of probe versus irrelevant items and found a stronger RT-CIT effect for more novel probes . Interestingly, when comparing our RT-CIT effect to that of a classical CIT study , which used a similar design, stimuli and was also performed online, a significant difference was observed. Specifically, the RT-CIT effect of our novel Go/No-go CIT, Cohen’s d = 1.69 (95% CI = [1.36, 2.02]), was significantly larger than that of the classical CIT study, Cohen’s d = 1.24 (95% CI [0.90; 1.57]). Although the BF (BF 10 = 1.64) provides only weak evidence for this difference, a Bayesian sequential analysis showed increasing evidence for the alternative hypothesis as data accumulates (suggesting that more data should be obtained). Similarly, the Cohen’s d (1.69, 95% CI = [1.36, 2.02]) observed in the present study is higher than the mean Cohen’s d (1.30, 95% CI [1.06; 1.54]) reported in the meta-analysis of Suchotzki et al . (2017). Moreover, the current AUC value (0.92), which indicates detection efficiency of knowledgeable and unknowledgeable individuals, exceeds the mean AUC value (0.82) reported in the review paper by Meijer et al . (2016). Together this suggests that the additional “no-go” trials in our novel Go/No-go CIT may have increased CIT detection efficiency. The observed increase in CIT detection efficiency may be the result of heightened cognitive load, a factor previously shown to enhance the RT-CIT effect . For example, Visu-Petra et al . (2013) compared three CIT conditions: a classical RT-CIT, a RT-CIT with a concurrent memory task, and a RT-CIT with a concurrent set-shifting task. In line with the idea that additional cognitive load increases CIT detection efficiency, the RT-CIT effect was higher in the conditions that included an additional task (as evidenced by a larger increase in probe RTs than irrelevant RTs). Similarly, the no-go trials of our Go/No-go RT-CIT likely raised cognitive load, thereby reducing the capacity for inhibitory control and conflict resolution. 
Moreover, the additional no-go items may have also (1) made it harder to correctly respond to the different types of stimuli, thereby increasing conflict, and (2) diminished the relative frequency of probes, thereby amplifying the orienting response. As both conflict and orienting have been suggested to underlie the RT-CIT effect [see ], it can explain how our modified format increased detection efficiency. Future investigations should aim to directly compare this novel format with a classical RT-CIT. Additionally, while we strictly adhered to our preregistered protocol, future studies should aim to address several methodological limitations of the present study. First, as previously mentioned, the use of a non-forensic student sample may have influenced our findings. Therefore, investigating how more diverse samples could yield different results is essential. Moreover, conducting the experiment online may have influenced the RT-CT effect and, consequently, potentially affected the observed relationship between the RT-CIT effect and secondary psychopathy. Hence, replication studies conducted in a controlled laboratory setting are crucial [see ]. Furthermore, while the use of highly salient autobiographical details ensured a strong CIT effect, it may not reflect real-world scenarios accurately. Thus, future studies should also examine the relationship between psychopathy and CIT using less salient crime-related stimuli, for instance. Lastly, it might be more appropriate to use the Single-Probe Protocol (SPP) of the CIT, where each block detects a single piece of information pertinent to the issue under investigation. This method is often the sole feasible interviewing approach in real-life contexts . Furthermore, we would like to suggest that future examinations of psychopathy within the CIT incorporate both RT and neural measures. Notably, psychopaths exhibit distinct neural responses during tasks assessing conflict and orientation–i.e., the mechanisms assumed to underlie the RT-CT effect . As such, methods such as fMRI, capable of monitoring conflict-related neural activity [see , – ], and EEG, capable of examining the P300 component of the event-related potential associated with attentional orientation [e.g., ], hold particular promise. Integrating these neuroimaging methods would not only deepen our understanding of the RT-CIT effect but also further elucidate the neurobiological underpinnings of psychopathy, thereby advancing both fields of study. In summary, previous studies have provided scientific evidence indicating that psychopathy does not affect the physiological response-based CIT . The present study provides preliminary evidence that psychopathic tendencies similarly do not affect the response time-based CIT. This is reassuring, as it suggests that although such tendencies do not improve CIT detection efficiency, they do not impede it. To expand and confirm these findings, future research is crucial. This should include conceptual replication studies using more diverse participant samples, CIT stimuli, and alternative protocols such as the SPP. Moreover, given the theoretical insight that orientation, alongside conflict, may drive the RT-CIT effect, it is imperative to thoroughly investigate the underlying mechanisms of this effect. Such exploration will not only advance theory but also deepen our understanding of practical aspects, such as susceptibility to countermeasures and potential influences from different clinical conditions. 
Ultimately, these investigations will bolster the validity and practical application of the RT-CIT across diverse settings and populations. S1 Graphical abstract (TIF)
Bladder Mucosa Harvested with Holmium Laser for Treatment of Urethral Strictures: Does the Graft Have its Tissue Integrity Preserved?
1090e9aa-6e90-44cd-a792-0dcf63c85741
11884635
Surgical Procedures, Operative[mh]
The use of oral mucosa as a graft for the treatment of urethral stricture is well established, but not free from morbidity . Bladder mucosa has been utilized in various forms of urethral reconstruction, particularly in cases where other graft materials are not suitable or available . For instance, Ozgök et al. demonstrated the use of bladder mucosa grafts in urethral reconstruction for patients with penoscrotal or scrotal hypospadias, showing a complication rate of 28.6% . Similarly, Monfort et al. reported successful outcomes using bladder mucosa grafts for urethral strictures in children, with most patients achieving satisfactory results . Additionally, Garat and Villavicencio described the use of tubularized bladder mucosal grafts for posterior urethroplasty, indicating good initial results in challenging cases . More recent techniques, such as those described by Westin et al., involve the use of Holmium: YAG laser for transurethral harvesting of bladder mucosa, which has shown promising preliminary results for dorsal onlay urethroplasty . These studies collectively support the feasibility and effectiveness of bladder mucosa as a graft material in urethroplasty, particularly in complex or recurrent cases where other graft options may be limited. Studies showing the histological integrity of bladder mucosa graft removed using laser have never been done. Our hypothesis is that laser removal of the bladder mucosa preserves the tissue integrity of the graft. The aim of this study is to evaluate the integrity and the microstructural characteristics of the bladder mucosa graft harvested using a minimally invasive technique with the Holmium laser (Ho-YAG) for the treatment of urethral stricture. The study was approved according to the ethical standards of the hospital's institutional committee on experimentation with human beings (IRB number 51456521.8.0000.5259). We prospectively analyzed 11 patients, admitted to our service between November 2021 and January 2024. Inclusion criteria consisted of patients having a diagnosis of anterior urethral stenosis, with or without recurrence, and were indicated for urethroplasty with graft (strictures greater than 2.5 cm). Exclusion criteria included: genitourinary malformations, a history of pelvic radiotherapy, a history of bladder cancer, and those with an indication for staged urethroplasty. Every patient was staged using cystourethrography and uroflowmetry except in those using a suprapubic urinary diversion. All surgeries were performed by a single surgeon with experience in urethral surgery. Due to the physical characteristics of the bladder mucosa (soft and tenacious tissue), we chose to perform dorsal onlay or dorsum lateral onlay urethroplasty to avoid diverticula formation. After placing the patient in the lithotomy position, a perineal incision was made permitting access to the bulbar urethra. The next step proceeded with either the dorsal or lateral dorsum urethral dissection and following with location of the stenosis aided by a urethral catheter, longitudinal section, and measurement of the strictured urethral segment until reaching the suspected healthy proximal and distal urethral areas. A 22 or 18.5F resectoscope with a working element adapted for the laser fiber was then passed through the proximal urethrostomy followed by a urethroscopy and cystoscopy using a 0.9% saline solution as irrigation fluid . 
This is performed to aid in identifying possible bladder and/or urethral pathologies and anatomical landmarks for marking the graft donor region. The Holmium laser was set to an energy of 0.5 to 0.8 J and a frequency of 30 to 40 Hz. After filling the bladder to full capacity, a rectangular marking of the donor graft area was made immediately above the inter-ureteral bar . Dissection of the graft was then performed using the 550μm laser fiber, always going from lateral to medial and subsequently from proximal to distal, with the muscular layer of the bladder as the deepest plane of dissection. Upon completing dissection, the graft was extracted from the bladder's interior using forceps, hemostasis was then performed on the edges of the donor area, and a small fragment of the graft was removed for histological analysis. The fragment of bladder mucosa was fixed in 10% buffered formalin, and routinely processed for paraffin embedding, after which 5µm thick sections were obtained at 15 µm intervals and studied by histochemical methods. The sections were stained with hematoxylin-eosin to assess the integrity of the tissue. We also performed staining with Masson's trichrome. Five sections were stained, and five fields of each section were selected. All selected fields were photographed with a digital camera (Olympus DP70, Tokyo, Japan) under the same conditions at a resolution of 2040 × 1536 pixels, directly coupled to the microscope (Olympus BX51, Tokyo, Japan) and stored in a TIFF file. We used the Image J software, version 1.46r, loaded with its own plug-in to determine tissue integrity. The demographic data and the etiology of the urethral strictures of the patients studied are shown in . The patients' ages ranged from 31 to 70 years old (mean = 53.45). The mean bladder graft size was 53.64 mm (range 4 to 7 cm) and the mean harvesting time was 47.63 minutes (range 25 to 75 minutes). The histological study of the bladder wall graft showed an organization in accordance with normal standards, with the presence of an intact urothelium in the bladder graft with no signs of compromise after laser removal . The bladder mucosa graft was lined by transitional epithelium (urothelium), which is composed of multiple layers of cells. The submucosal layer was preserved, joining the detrusor to the urothelium, and the collagen and elastic fibers were well organized. The lamina propria lies beneath the urothelium and is composed of loose connective tissue containing blood vessels, nerves, and lymphatics and contains wispy, slender fascicles of the muscularis mucosae (MM), which can appear as individual or small groups of wavy muscle fibers . The use of a laser to collect bladder mucosa for urethroplasty is possible and was described for the first time by Joseph Memmelaar in 1947, for the treatment of 4 patients with hypospadias. Applying the knowledge and technology of that time, the grafts were harvested using an open technique and tubularized for the repair of hypospadias in a 1-stage procedure, obtaining patency in 3 out of 4 patients after 1 year. Specifically, the Holmium: YAG (Ho:YAG) laser has been utilized for this purpose. A recent study described a technique for transurethral harvesting of bladder mucosal grafts using the Ho:YAG laser. This technique was applied in a series of patients undergoing dorsal onlay urethroplasty for anterior urethral stricture. The results indicated that the procedure is feasible and reproducible, with comparable outcomes to other graft types used in urethroplasty .
Another study by Figueiredo et al. also supports the feasibility of using the Ho:YAG laser for endoscopic harvesting of bladder mucosal grafts . This study described the successful application of this technique in a patient with a bulbar urethral stricture, further suggesting that bladder mucosal grafts harvested with the Ho:YAG laser could be a viable alternative to buccal mucosa grafts in urethral reconstruction . Buccal mucosa used for urethroplasty has been shown to retain its histopathological characteristics after engraftment to the urethra. Soave et al. found that buccal mucosa transplants maintain their structure and are not overgrown with urothelium after being integrated into the urethra . In our study, we observed preservation of the histology of the bladder mucosa after laser resection of the graft. According to a study by Li et al., the freeze-thaw technique can maintain the structure and biological function of bladder mucosa. The study demonstrated that no significant histological changes were observed in the frozen-thawed bladder mucosa compared to fresh bladder mucosa, and the urethral epithelial cells grew well postoperatively . In our paper we studied the bladder histology with hematoxylin and eosin (H&E) and Masson's trichrome. The bladder mucosa, when stained with H&E, exhibits several distinct histological features: the transitional epithelium, a supportive lamina propria with variable muscle fiber patterns, and a deeper muscularis propria with more organized muscle bundles , which supports the structural analysis of bladder mucosa in our study. In the study by Julio Junior et al., Masson's trichrome stain was used to quantify connective tissue and smooth muscle in the bladder structure of fetuses with Prune Belly syndrome . This demonstrates the utility of Masson's trichrome stain in analyzing the structural components of the bladder mucosa. Additionally, Paner et al. utilized Masson's trichrome stain to differentiate between muscularis propria and muscularis mucosae in the urinary bladder, further supporting its application in detailed structural analysis of bladder tissues . Thus, Masson's trichrome stain is a valuable tool for examining the structural details of the bladder mucosa, particularly in distinguishing between different tissue types such as collagen and smooth muscle. The present paper has some limitations: a small sample, the lack of ultrastructural analysis of the bladder mucosa, and the lack of longer follow-up. In conclusion, our findings suggest that the graft harvested from the bladder uroepithelium using Ho-YAG has its histological integrity preserved, which makes this technique a viable option for reconstructive surgery. However, more studies are needed to establish the long-term efficacy and safety of this new technique.
Effects of Combined Transcriptome and Metabolome Analysis Training on Athletic Performance of 2-Year-Old Trot-Type Yili Horses
4676422b-30c9-4dc4-bd0e-c8ae5208ec36
11855102
Biochemistry[mh]
The growth and transformation of the modern equine industry have resulted in an increased variety and number of horse events, as well as a heightened demand for sport horses . Athletic training and husbandry management are the primary factors influencing the event results and overall performance in horses . The phenotypic and physiological changes resulting from exercise training have been extensively studied. Adaptations within the organism, driven by exercise-induced alterations in muscle loading, energy demand, and calcium fluxes, can facilitate the timely detection of abnormalities, yielding beneficial effects on cardiorespiratory, endocrine, and neurological health . There is notable variability in the internal environmental changes that occur in response to different training intensities. Endurance training enhances the body’s aerobic capacity and promotes a transition from carbohydrate to fat metabolism. Resistance training fosters protein synthesis and facilitates a slow-to-fast muscle transition rate. Additionally, neuromuscular training serves as a strategy to manage motor control, improve the motor–sensory system, increase dynamic joint stability, and reduce the risk of injury . Research has demonstrated that exercise training can regulate glucose and lipid metabolism, improve insulin sensitivity and anti-inflammatory capacity, reduce oxidative stress, stimulate muscle protein synthesis and satellite cell activation, enhance muscle fiber size, decrease body fat, and increase muscle strength, thereby improving competitive performance . Kowalik et al. found that plasma concentrations of muscle growth inhibitors increased in 20 Arabian horses after 8 months of endurance training . Fernandez et al. found that tennis players who participated in five weeks of neuromuscular training exhibited significantly improved sprint speed and movement sensitivity during the later stages of training compared to the earlier stages . Transcriptomics is a powerful tool for identifying gene expression signatures . Metabolomics serves as a valuable method for identifying metabolites and elucidating changes in metabolite levels within biological systems under varying conditions . Concurrently, high-throughput sequencing technology has been employed to identify genes with significant phenotypic effects, enabling the development of training programs tailored to an animal’s susceptibility . Studies indicate that repetitive exercise training induces the emergence of new gene expression in resting muscles, likely reflecting the organism’s capacity to adapt to training and influence exercise-related phenotypes . Genes relevant to specific biological contexts can be identified by comparing gene expression across different states, such as before and after a particular exercise, or among various tissues , including blood versus muscle , as well as across different breeds and hybrids . Hou et al. observed significant reductions in the levels of alanine, aspartate, glutamate, and pantothenic acid, along with the down-regulation of genes such as ENTPD3 , ENTPD31 , and CMPK2 , following 8 weeks of countercurrent swim training in zebrafish . This analysis was conducted using a combination of transcriptomics and metabolomics. In a related study, Isung et al. found that high-intensity exercise leads to an increase in glutamate levels in skeletal muscle, which subsequently promotes the release of alanine to facilitate ammonia metabolism . 
Training plays a vital role in enhancing overall musculoskeletal health and athletic performance in sport horses . Previous studies have employed transcriptomics to investigate the impact of training on equine athletic performance ; however, relying on a single technique offers a limited perspective and fails to provide a comprehensive understanding of how training influences athletic performance. Joint multi-omics analysis serves as an integrative approach that combines various data types to illuminate the dynamics of an organism’s internal environment from multiple angles . Yili horses, recognized as the first breed independently developed in China, exhibit notable traits such as exceptional adaptability and disease resistance. Consequently, this study focuses on trot-type Yili horses, implementing a 12-week specialized training program. We investigated the impact of training on the horses’ athletic performance at the molecular level using transcriptomic and metabolomic technologies, ultimately identifying new genetic markers associated with athletic performance. This research aims to provide a theoretical foundation for the development of more professional and effective conditioning and training programs. 2.1. Test Animals and Training Programmes The trial involved 12 two-year-old trot-type Yili horses from Zhaosu Horses Farm in Xinjiang, which underwent a 12-week training program . Whole blood and serum samples were collected from all 12 horses under resting conditions during the before-training and after-training periods. Weekly test races were organized, and the four horses that consistently ranked in the top four positions across all races were selected based on their athletic performance. Transcriptomic and metabolomic sequencing was subsequently performed on the whole blood and serum samples from these four horses. 2.2. Sample Collection Use the Finish Timing System to record the horse’s race results. The blood of horses at rest during the BT and AT periods was collected and rapidly placed in liquid nitrogen, and subsequently placed in a −80 °C refrigerator for cryopreservation. 2.3. Transcriptome Analysis 2.3.1. RNA Isolation, Library Preparation and Sequencing Total RNA was extracted from the blood of horses in both the BT and AT groups using the TRIzol extraction kit (Thermo Fisher Scientific, Waltham, MA, USA), with four biological replicates included for each group. Pairwise end sequencing was performed on the Illumina NovaSeq 6000 (Illumina, San Diego, CA, USA), which involved the removal of reads with junctions, reads containing unidentifiable base information, and low-quality reads to ensure the acquisition of high-quality clean reads. The values of clean reads were compared against a known horse reference genome ( https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/002/863/925/GCF_002863925.1_EquCab3.0/ , accessed on 14 June 2024) to obtain information regarding the positioning of reads on the horse reference genome. 2.3.2. Differentially Expressed Genes Analysis Differentially expressed genes (DEGs) were identified among the various control groups using the DESeq2 package in R (version 4.4.1), which is based on the negative binomial distribution. Genes were considered differentially expressed if they met the criteria of |log 2 (Fold Change)| ≥ 1 and a p -value < 0.05. 2.3.3. 
Functional Enrichment Analysis Gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGGs) pathway enrichment analyses were conducted utilizing the NovoMagic cloud platform ( https://magic.novogene.com/customer/main#/loginNew , accessed on 28 August 2024). A significance threshold of p < 0.05 was employed to identify crucial functional pathways. 2.3.4. Protein Network Interactions Analysis Mapping of identified DEGs to the Search Tool for the Retrieval of Interacting Genes (STRINGs) database ( http://string-db.org/ , accessed on 12 September 2024). PPI networks were built using Cytoscape software (version 3.10.0) and core genes were identified using the MCC model in the CytoHubba plugin. 2.4. Metabolome Analysis Metabolites were extracted from the plasma of horses in the BT and AT, with four biological replicates in each group. All samples were thawed at room temperature, 100 µL of thawed plasma samples was transferred to EP tubes with 400 µL of 80% methanol, and 1 mL of samples was lyophilized with 100 µL of 80% aqueous methanol; vortexed and shaken, and left to stand on an ice bath for 5 min, and then centrifuged at 15,000× g for 15 min at 4 °C. A certain amount of supernatant was diluted with mass spectrometry grade water to 53% methanol, centrifuged at 15,000× g and 4 °C for 15 min, and then the supernatant was collected and injected into LC-MS for analysis. The default criteria for differential metabolite screening were VIP > 1, p -value < 0.05, and FC ≥ 1.5 or FC ≤ 0.667. 2.5. Combined Transcriptome and Metabolome Analysis A joint analysis of the transcriptome and metabolome revealed overlapping pathways between these two biological domains. Differentially expressed genes and metabolites were screened from the transcriptome and metabolome and subsequently analyzed together using the Pearson correlation analysis method. Visual heat maps and networks were generated to illustrate the findings. 2.6. Statistical Analysis The results of the competition were analyzed using the one-way analysis of variance (ANOVA) method in SPSS 26.0 software and the results are expressed as the mean ± standard deviation. Differences were judged at p < 0.05 and p < 0.01.
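Purely as an illustration of the screening thresholds stated in Sections 2.3.2 and 2.4, the sketch below filters hypothetical DESeq2 and metabolomics result tables in Python; the column names are assumed for this example, and the study itself performed these steps with DESeq2 in R and the LC-MS pipeline described above.

```python
import pandas as pd

def screen_degs(deseq2_results: pd.DataFrame) -> pd.DataFrame:
    """Keep genes with |log2FoldChange| >= 1 and p-value < 0.05 (criteria in 2.3.2)."""
    return deseq2_results[
        (deseq2_results["log2FoldChange"].abs() >= 1)
        & (deseq2_results["pvalue"] < 0.05)
    ]

def screen_metabolites(metab_results: pd.DataFrame) -> pd.DataFrame:
    """Keep metabolites with VIP > 1, p-value < 0.05 and FC >= 1.5 or FC <= 0.667 (criteria in 2.4)."""
    fc = metab_results["fold_change"]
    return metab_results[
        (metab_results["VIP"] > 1)
        & (metab_results["pvalue"] < 0.05)
        & ((fc >= 1.5) | (fc <= 0.667))
    ]

# Hypothetical usage with toy tables:
genes = pd.DataFrame({"gene": ["FOS", "PRF1", "XYZ"],
                      "log2FoldChange": [1.8, -1.2, 0.3],
                      "pvalue": [0.01, 0.04, 0.30]})
metabs = pd.DataFrame({"metabolite": ["carnosine", "asparagine", "glucose"],
                       "VIP": [1.6, 1.3, 0.4],
                       "fold_change": [2.1, 0.5, 1.1],
                       "pvalue": [0.02, 0.03, 0.50]})
print(screen_degs(genes))
print(screen_metabolites(metabs))
```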
3.1. Changes in Race Performance of Horses at Different Stages of Training The horses’ race times were highly significantly lower in the AT than in the BT ( p < 0.01) periods . The horses’ race times were significantly lower in the mid-training period than in the BT ( p < 0.05) period . 3.2. Transcriptome Results Analysis 3.2.1. Transcriptome Quality Control Analysis The raw data of the blood transcriptome of horses in the BT and AT groups were 192,181,348 and 167,025,522, respectively, and 46,730,816 and 73,255,360 high-quality and valid data were obtained after quality control filtering of the raw data . The percentage of Q20 bases was above 96.49%, the rate of Q30 bases was above 92.77%, and the GC content ranged from 51.34% to 58.40% . When clean reads were compared to the horse’s reference genome, the average comparison efficiency for the eight samples was 71.37% . Due to the specificity of the blood samples, the sequencing data were largely compliant and met the requirements for subsequent analyses. 3.2.2. Differential Genes Analysis A total of 57 DEGs, including 33 up-regulated and 24 down-regulated genes, were identified in the blood of horses in the AT versus BT groups ( A,B). FOS , PRF1 , CD3E , CCL5 , HSD17B1 , and TMPRSS6 were significantly up-regulated in the AT group compared to the BT group ( p < 0.05), and C1QTNF12 , GATA1 , CCR3 , and ND5 were significantly down-regulated compared to the BT group ( p < 0.05) . 3.2.3. Transcriptome Pathway Enrichment Analysis GO functional enrichment analyses of DEGs in the blood from horses in AT versus BT groups were analyzed, and the top 10 GO term entries in each GO category were selected for display. Up-regulated genes were enriched in 105 GO entries, of which they were significantly enriched in 34 GO entries ( p < 0.05), mainly in GO entries related to G protein-coupled receptor binding, chemokine activity, heme binding, serine-type peptidase activity, and proteolysis ( A). Down-regulated genes were enriched in 50 GO entries, of which they were significantly enriched in 7 GO entries ( p < 0.05), mainly in GO entries for the aminoglycan metabolic process, sulfur compound metabolic process, G-protein-coupled receptor activity, and DNA polymerase activity ( B).
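The GO and KEGG enrichment reported here was obtained from the NovoMagic platform; to illustrate the underlying logic only, the over-representation p-value for a single pathway can be computed with a hypergeometric test, as in the sketch below (all gene counts are invented).

```python
from scipy.stats import hypergeom

def enrichment_pvalue(n_background, n_pathway, n_degs, n_overlap):
    """P(X >= n_overlap) when drawing n_degs genes from a background of
    n_background genes, n_pathway of which belong to the pathway."""
    return hypergeom.sf(n_overlap - 1, n_background, n_pathway, n_degs)

# Hypothetical numbers: 20,000 background genes, 150 genes in the pathway,
# 57 DEGs, 5 of which fall in the pathway.
print(enrichment_pvalue(20_000, 150, 57, 5))
```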
In KEGG pathway analysis, up-regulated genes were enriched in 101 pathways, of which they were significantly enriched in 24 pathways ( p < 0.05). Twelve pathways were screened for the possible regulation of equine athletic performance, including apoptosis, the dopaminergic synapse, circadian entrainment, steroid hormone biosynthesis, the cholinergic synapse, the relaxin signaling pathway, type I diabetes mellitus, the GABAergic synapse, the chemokine signaling pathway, the glutamatergic synapse, ribosome, and the serotonergic synapse ( C). Down-regulated genes were enriched in 21 KEGG pathways, with significant enrichment in 3 KEGG pathways ( p < 0.05). Screening was conducted of 10 pathways, including non-homologous end-joining, cytokine-cytokine receptor interaction, mismatch repair, RNA degradation, retrograde endocannabinoid signaling, oxidative phosphorylation, the chemokine signaling pathway, diabetic cardiomyopathy, amyotrophic lateral sclerosis, and the pathways of neurodegeneration-multiple diseases, which may modulate equine athletic performance ( D). 3.2.4. Protein Network Interactions Results Analysis The protein-protein interaction (PPI) network was constructed by integrating differentially expressed genes (DEGs) with protein interaction data from the STRING database. Disconnected nodes were removed and the largest connected sub-network was retained ( A). A total of 10 core genes, including CCL5 , FOS , CD3E , HSD17B1 , CCR3 , GATA1 , TMPRSS6 , CLEC1A , PRF1 , and ND5 , which may be significantly associated with the athletic performance of the trot-type Yili horses, were screened by the MCC model in Cytoscape software CytoHubba ( B). 3.3. Metabolome Results Analysis 3.3.1. Metabolome Quality Control Analysis To ensure the stability of the overall test, PCA and correlation analyses were performed on the test samples. PC1 and PC2 accounted for 33.26 percent and 15.83 percent of the total variation ( A). To better distinguish the differences between groups, PLS-DA analysis was performed on top of PCA analysis, which showed significant differences between groups ( B,C). The correlations between samples were close to 1.00 ( D). The above results showed that there was a significant difference in the plasma of horses in the AT and in the BT groups. 3.3.2. Differential Metabolites Analysis A total of 121 differential metabolites were screened in AT versus BT plasma, of which 78 differential metabolites were up-regulated and 43 differential metabolites were down-regulated in expression ( A,B). Differential metabolites that may be associated with equine athletic performance were screened, with dehydroepiandrosterone (DHEA), cis-aconitic acid, and carnosine all being significantly higher than in the BT group ( p < 0.05), and pentadecanoic acid, asparagine, androsterone, and ergothioneine being significantly lower than in the BT group ( p < 0.05) . 3.3.3. Metabolome Pathway Enrichment Analysis The differential metabolites were analyzed for KEGG functional annotation and pathway enrichment in this experiment. Differential metabolites were enriched in a total of 38 metabolic pathways, of which 3 differential metabolites were highly significantly enriched in the histidine metabolism ( p < 0.01).
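As an illustrative counterpart to the quality-control step above, the following sketch runs an autoscaled PCA on a samples × metabolites intensity matrix with scikit-learn; the matrix here is random toy data, and the original analysis may have used different software and scaling.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Toy intensity matrix: 8 samples (4 BT + 4 AT) x 200 metabolite features.
X = rng.normal(size=(8, 200))
groups = ["BT"] * 4 + ["AT"] * 4

X_scaled = StandardScaler().fit_transform(X)   # autoscale each metabolite feature
pca = PCA(n_components=2).fit(X_scaled)
scores = pca.transform(X_scaled)

print("Explained variance (PC1, PC2):", pca.explained_variance_ratio_.round(3))
for group, (pc1, pc2) in zip(groups, scores):
    print(group, round(pc1, 2), round(pc2, 2))
```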
Thirteen pathways potentially related to athletic performance were screened, including histidine metabolism, beta-alanine metabolism, renin secretion, the FoxO signaling pathway, the mTOR signaling pathway, the PI3K-Akt signaling pathway, the AMPK signaling pathway, platelet activation, circadian entrainment, the citrate cycle (TCA cycle), arachidonic acid metabolism, aldosterone synthesis and secretion, and purine metabolism . 3.4. Association Analysis Between Transcriptomic and Metabolomic Data To investigate the association between differential genes and differential metabolites in AT and BT blood, the KEGG enrichment pathways were integrated. The Venn diagram results showed that there were five shared KEGG pathways for the differential genes and differential metabolites, namely circadian entrainment, the serotonergic synapse, the PI3K-Akt signaling pathway, the oxytocin signaling pathway, and the cAMP signaling pathway ( A). Correlation tests were conducted for 11 differentially expressed genes and 20 differential metabolites associated with athletic performance based on analyses of transcriptomic and metabolomic KEGG-enriched pathways. By Pearson correlation analysis of 11 differential genes and 20 differential metabolites, a total of 8 genes were significantly positively or negatively correlated with one or more of the 19 metabolites ( B,C).
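The gene–metabolite association step described above (Pearson correlations between the selected differential genes and metabolites) can be sketched as follows; the expression and metabolite tables here are random placeholders, and the gene and metabolite names are taken from the text only to make the example readable.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def gene_metabolite_correlations(expr: pd.DataFrame, metab: pd.DataFrame):
    """Pearson r and p-value for every gene x metabolite pair.

    expr  -- samples x genes expression table
    metab -- samples x metabolites abundance table (same sample order)
    Returns two DataFrames (r values and p values) indexed by gene.
    """
    r = pd.DataFrame(index=expr.columns, columns=metab.columns, dtype=float)
    p = r.copy()
    for gene in expr.columns:
        for met in metab.columns:
            r.loc[gene, met], p.loc[gene, met] = pearsonr(expr[gene], metab[met])
    return r, p

# Hypothetical usage with toy data for 8 samples:
rng = np.random.default_rng(2)
expr = pd.DataFrame(rng.normal(size=(8, 3)), columns=["FOS", "CCL5", "GATA1"])
metab = pd.DataFrame(rng.normal(size=(8, 2)), columns=["carnosine", "DHEA"])
r, p = gene_metabolite_correlations(expr, metab)
print(r.round(2))
print(p < 0.05)   # flag nominally significant gene-metabolite pairs
```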
Regular long-term exercise training can induce a variety of adaptive responses in the body, which not only enhance glycolysis and fatty acid metabolism but also increase the sensitivity of the central nervous system and the pancreatic islet system, thereby strengthening the body's exercise capacity.
Integrating transcriptomics and metabolomics provides a more precise understanding of the molecular regulatory mechanisms that underlie athletic performance in trot-type Yili horses, thereby reflecting their physiological state. Members of the FOS family are not only implicated in the pathogenesis of various diseases but also serve as reliable markers of neural activity and play crucial roles in the maintenance of skeletal cell development. In the present study, FOS was found to be up-regulated in AT, suggesting that FOS promotes the development of the horse's brain and regulates the central nervous system. This up-regulation may lead to increased excitability of the central nervous system, enhanced motor performance, and improved development of skeletal cells. Additionally, CCL5 is known to regulate the movement of memory T lymphocytes, monocytes, macrophages, and eosinophils. Related studies have indicated that CCL5 plays a significant role in promoting synaptic growth and memory formation, and it is involved in central nervous system disorders, particularly those associated with neuroinflammatory processes. In their study of related genes in mice, Szalay et al. found that CCL5 up-regulated the expression of IL-10 in both vascular smooth muscle cells and the brain. They demonstrated that IL-10 promotes the differentiation of type 2 microglia and prevents the over-activation of pathological microglia, indicating a protective role for CCL5 in the context of neuronal injury. In the present study, CCL5 expression was significantly up-regulated in AT, suggesting that CCL5 stimulates eosinophil activity, maintains the body's acid-base balance, and reduces the incidence of acid-base disturbances in horses. It is hypothesized that CCL5 may play a role in regulating the homeostasis of the horse's internal environment, thereby enhancing its athletic performance. Reißmann et al. sequenced the transcriptome of Kabardino horse blood following long-distance (70 km) and short-distance (15 km) endurance running. Their findings revealed that pathways associated with activation of the inflammatory system, carbohydrate catabolic processes, lipid biosynthesis, and NADP metabolic processes, as well as specific genes such as ACOD1, CCL5, CD40LG, FOS, and IL1R2, can be utilized to monitor equine exercise performance. These results are consistent with those of previous studies. Studies have shown that the PI3K-Akt, mTOR, and FoxO signaling pathways play central regulatory roles in carbohydrate, lipid, and protein metabolism by coordinately modulating biological processes such as apoptosis, energy metabolism, and oxidative stress. Among these, mTOR integrates nutritional signals (e.g., amino acid availability), energy signals (the ATP/AMP ratio), and growth factors (e.g., insulin/IGF-1) through its complexes (mTORC1/mTORC2) to regulate critical metabolic processes, including protein synthesis (e.g., via p70S6K activation), lipid metabolism, and mitochondrial biogenesis. McGivney et al. showed through transcriptomic analysis that, in thoroughbred horses exercised on a treadmill to maximum heart rate, mTOR signaling-related genes (4EBP1, TSC2, VEGF) were up-regulated in skeletal muscle 4 h post-exercise, demonstrating that exercise-induced stress enhances metabolic adaptation through time-dependent modulation of the mTOR network. Takegaki et al.
further revealed that three sessions of resistance exercise in 18 male mice significantly activated the phosphorylation of the mTOR signaling marker p70S6K in skeletal muscle, confirming the cross-species conservation of mTOR pathway function in exercise-mediated anabolism. Our findings indicate that differentially expressed genes and metabolites were co-enriched in the PI3K-Akt signaling pathway, while differential metabolites were specifically enriched in the PI3K-Akt, mTOR, and FoxO signaling pathways. This suggests that, under the experimental conditions, these pathways collectively regulate exercise performance by integrating growth factor signaling to modulate substrate metabolism, sensing amino acid and energy status to balance anabolism and catabolism dynamically, and activating lipolysis and antioxidant gene expression. C1QTNF12, a member of the C1QTNF family, plays a crucial role in glucose metabolism within the liver and adipose tissue by promoting glucose uptake in adipocytes and inhibiting de novo glucose production in hepatocytes through the PI3K-Akt signaling pathway. Additionally, C1QTNF12 is involved in regulating inflammation, vascular remodeling, and cardiac fibrosis, which may contribute significantly to cardiovascular injury. In the present study, the significant down-regulation of C1QTNF12 in AT aligns with previous findings. This suggests that C1QTNF12 may mitigate inflammation in AT, enhance hormonal activity, regulate gluconeogenesis, and store substantial energy for the organism, ultimately leading to improved athletic performance. Steroid hormones, which are fat-soluble hormones, are primarily divided into two main groups: sex hormones and adrenocortical hormones. These hormones are typically synthesized in various tissues, including the adrenal cortex, gonads (testes and ovaries), brain, placenta, and adipose tissue. Adrenocortical hormones, secreted by the adrenal cortex, play a crucial role in regulating glucose metabolism: they inhibit glucose oxidation, which raises blood glucose levels, and promote the conversion of protein into glucose. Additionally, they facilitate the retention of sodium ions while promoting the excretion of excess potassium ions, thereby regulating water and salt metabolism. The secretion of sex hormones is controlled by gonadotropins originating from the pituitary gland. In the results of this study, the steroid hormone synthesis pathway emerged as significantly enriched in both the transcriptome and the metabolome with respect to the effects of training on equine energy transport performance. Notable differential genes and metabolites enriched within this pathway included HSD17B1, testosterone, and dehydroepiandrosterone. HSD17B1 is biologically significant as it catalyzes the conversion of androstenedione to testosterone, facilitates the reduction of DHEA to androstenediol, and metabolizes dihydrotestosterone, the most potent androgen, into 3β-diol and 3α-diol. In the present study, we observed that the expression of HSD17B1 was up-regulated in AT, indicating that training enhances the in vivo function of HSD17B1, which leads to increased secretion of steroid hormones, including testosterone, in the organism. Testosterone is a crucial anabolic steroid hormone that plays a significant role in the growth and maintenance of skeletal muscle, enzyme proteins, bone, and red blood cells, and it also contributes to neurological functions.
Relevant studies have demonstrated that testosterone can directly influence androgen receptors on osteoblasts and osteoclasts, resulting in increased trabecular bone formation, inhibition of osteoclast activity, reduced bone resorption, and concurrently enhanced muscle strength and bone mineral density. Additionally, it promotes oxidative muscle metabolism, thereby improving athletic performance. Hackney et al. found that prolonged endurance exercise induces a biphasic response in testosterone levels, characterized by an immediate increase post-exercise followed by a subsequent decrease during the recovery phase. Our findings revealed an increase in blood testosterone levels after training compared with pre-training levels; however, testosterone levels during the recovery phase were not assessed, indicating a need for further investigation. DHEA, primarily synthesized by the adrenal cortex and gonads, has several biological roles, including improving glucose tolerance, increasing insulin levels, exerting antidiabetic effects, enhancing endocrine system activity, lowering cortisol levels, restoring impaired immune responses, participating in the synthesis of various adrenal hormones, and enhancing T cell and B cell immune functions. Relevant studies have demonstrated that DHEA reverses vascular remodeling, enhances vascular endothelial cell function, and reduces oxidative stress in the body. In the present study, an increase in blood DHEA levels was observed in horses after exercise compared with before exercise. This could be because DHEA improves glucose tolerance and enhances athletic performance by increasing insulin levels and exerting antidiabetic effects in the body. Histidine is an essential amino acid for humans, other mammals, fish, and poultry. As a functional amino acid, it exerts specific metabolic effects beyond its role in protein metabolism. Histidine is involved in various metabolic pathways; it can be methylated to form 1-methyl- or 3-methylhistidine, converted to imidazole-pyruvic acid by transaminases, condensed with β-alanine to produce carnosine and anserine, or decarboxylated to generate histamine. In the present study, histidine metabolism was the pathway significantly enriched in the metabolomic analysis of the effects of training on equine energy transport performance, with differential metabolites such as carnosine, ergothioneine, and 1-methylhistidine enriched in this pathway. Ergothioneine, a natural antioxidant and nutritional supplement synthesized from histidine, has biological functions that include scavenging free radicals, mediating anti-inflammatory responses, and reducing oxidative stress. Studies have demonstrated that antioxidants positively influence exercise performance and the body's response to oxidative stress. Fovet et al. found that after two hours of exercise at a consistent maximal aerobic rate, mice supplemented with ergothioneine exhibited enhanced protein synthesis and satellite cell activation, along with reduced metabolic stress, inflammatory markers, and indicators of oxidative damage. Ergothioneine significantly improves aerobic performance, extends the time to post-exercise fatigue, and enhances muscle recovery. In the present study, there was a decrease in blood ergothioneine concentration in horses in the AT group.
This reduction may be attributed to the adaptation of horses to the training intensity over time, resulting in a diminished physiological response to metabolic stress and consequently a lower secretion of ergothioneine. Further investigation is required to elucidate the specific mechanisms involved. Carnosine (β-alanyl-L-histidine), a dipeptide composed of the amino acids L-histidine and β-alanine, is predominantly found in skeletal muscle. The carnosine content of skeletal muscle is largely influenced by factors such as age, gender, diet, muscle fiber type, and training intensity. Suzuki et al. demonstrated that men with elevated carnosine levels exhibited greater strength output during the latter stages of the 30 s Wingate test. Similarly, Baguet et al. found that rowers with higher carnosine levels achieved faster segment times in the second and third 500 m segments of the 2000 m race. In the present study, there was a significant increase in the concentration of carnosine in the blood after training. Research has shown that carnosine may enhance exercise performance by mitigating fatigue, delaying the acidosis induced by muscle contraction, and augmenting skeletal muscle force production during high-intensity exercise. Additionally, purine metabolism plays a crucial role in various cellular processes, including energy storage, nucleic acid and coenzyme synthesis, translation, and signaling within the body. It serves as a metabolic pathway for both the synthesis and catabolism of purines. Purine compounds fall into three primary groups: adenine derivatives (ATP, ADP, AMP, cAMP, NAD, adenosine), guanine derivatives (GTP, GDP, GMP, cGMP, guanosine), and related metabolites (hypoxanthine, xanthine, and uric acid). Research has demonstrated that the concentrations and effects of cGMP and cAMP in cells are antagonistic. For instance, elevated intracellular cAMP levels lead to the breakdown of glycogen into glucose, whereas increased cGMP levels promote the synthesis of glycogen from glucose. Gaitán et al. showed, in metabolomic analyses of athletes engaged in high-intensity exercise, that plasma lactate and adenine catabolites were up-regulated, alongside enhanced anaerobic metabolism and ATP cycling. In the present study, a significant increase in cGMP concentration and a highly significant decrease in 3′-adenylate concentration were observed in AT. These results are consistent with previous studies and indicate that training stimulates horses to enhance the rate of glycogen synthesis in vivo, thereby providing substantial energy for the body. Additionally, relaxation of vascular smooth muscle induces vasodilation, which increases blood flow and ultimately enhances exercise performance. Cis-aconitic acid is a crucial intermediate in the tricarboxylic acid cycle and is indicative of the extent of active aerobic metabolism. This cycle represents the final metabolic pathway for the complete oxidation of the three primary energy substrates and plays a vital role in supplying energy for sustained exercise training. Huang et al. demonstrated that endurance training in rats significantly enhanced both the rate of the tricarboxylic acid cycle and the organism's antioxidant activity, resulting in increased levels of intermediates such as pyruvate, malate, and aconitate. In the present study, there was a significant increase in the concentration of cis-aconitic acid in the blood of horses after training.
This observation suggests that training accelerates the tricarboxylic acid cycle, releasing more intermediates that supply energy to the body and thereby enhancing the endurance and athletic performance of the horses. This study selected 2-year-old trot-type Yili horses as experimental subjects. Although horses at this age are in a period of rapid development of athletic capacity with a relatively stable physiological status, age-related factors may still present the following potential risks: 2-year-old horses are in a critical period of skeletal muscle development and metabolic system maturation, during which gene expression networks may exhibit dynamic regulatory characteristics owing to changes in growth hormone and sex hormone levels. The observed differences in specific gene expression may therefore be partially attributable to age-dependent regulation rather than solely reflecting training effects. Under identical training protocols, individual variations in gene expression and metabolic adaptation may also be amplified by age factors. Therefore, the conclusions of this study may be specific to this particular developmental stage and cannot be generalized to horses of other age groups. Future research could incorporate horses from different age groups to comprehensively evaluate the impact of age on the relevant gene expression and metabolic regulation. In summary, through the integrated analysis of transcriptomics and metabolomics, seven genes (CCL5, CCR3, FOS, CD3E, HSD17B1, C1QTNF12, and GATA1) were identified. These genes are primarily associated with the athletic performance of trot-type Yili horses, highlighting their importance in how training influences equine exercise performance. Further investigation into their functions will aid in enhancing the athletic performance of trot-type Yili horses, and subsequent studies are required to assess their applicability to other horse breeds.
COVID-19 vaccine short-term adverse events in the real-life family practice in Krakow, Poland
21cefbb8-b774-41cf-9eea-cf0d793322e4
10249448
Family Medicine[mh]
The global spread of SARS-CoV-2 and the COVID-19 pandemic forced governments and healthcare systems, including primary care, to adapt quickly to this unprecedented situation . The scientific community and pharmaceutical companies made efforts to develop vaccines. In December 2020, supported by the evidence on the first mRNA vaccine (mRNAV) , the US Food and Drug Administration gave emergency authorisation to use this vaccine in the general population . Soon afterwards, the European Medicines Agency took a similar decision . In Poland, four COVID-19 vaccines were available, offered within the national vaccination programme framework. They were mRNAV (Comirnaty ® by Pfizer/BioNTech, Spikevax ® by Moderna) and adenovirus vector vaccines (VVV) (Vaxzevria ® by AstraZeneca and Jcovden ® by Janssen/Johnson & Johnson). The manufacturer of Comirnaty stated that the most frequent adverse reactions in its clinical studies were: injection site pain (>80%) and swelling (10%), fatigue (>60%), headache (>50%), myalgia (>40%), chills (>30%), arthralgia (>20%) and fever (>10%) . The Jcovden manufacturer stated, based on their studies, that the most common local reaction was injection site pain (48.6%), and systemic reactions were headache (38.9%), fatigue (38.2%), myalgia (33.2%) and nausea (14.2%) . Similar safety information was provided by the manufacturers of Spikevax and Vaxzevria . A nationwide COVID-19 vaccination programme was started in Poland on 28 December 2020 and has challenged the efficiency of the vaccine adverse events (VAEs) reporting system . In Poland, there is a compulsory VAEs reporting system: healthcare professionals are obliged to report VAEs to the State Sanitary Inspection, and the National Institute of Public Health (NIPH) then publishes reports on COVID-19 VAEs. NIPH, in their first report based on epidemiological surveillance (27 December 2020–26 April 2021), reported <7000 VAEs, of which >85% were mild and only 2% severe. Out of 10.5 million vaccinations, <0.1% were reported as associated with any VAE . At the time, anti-vaccination movements intensified their activity, accusing the government and pharmaceutical companies of misinforming the public and distorting data on the true prevalence of VAEs. This situation was similar to what was observed globally and could concern primary care physicians, too . As a result, <60% of the Polish population has been vaccinated against COVID-19, which is significantly below the European Union average (75%) . The study aimed to explore the prevalence of VAEs in general practice settings and the factors that may influence it. It was designed to answer the following questions: how frequent and severe are short-term VAEs in the adult population vaccinated against COVID-19 in the primary care setting? Is the real-life frequency of VAEs different from that reported by vaccine manufacturers or governmental institutions? What are the patients’ characteristics associated with experienced VAEs? Study design This is a pragmatic, mixed prospective and retrospective study on patient self-monitored COVID-19 VAEs. The Jagiellonian University Bioethics Committee approved the study on 21 April 2021 (Opinion No. 1072.6120.71.2021). Recruitment and data collection The participants were recruited between 2 May and 31 October 2021 in public COVID-19 vaccination hubs organised in three GP practices in Krakow, Poland. Right before the vaccination, the trained fieldworkers (medical students) invited the patients to participate.
All adults ≥18 years old who were qualified for the vaccination, agreed to participate and gave their informed consent were included in the study. First, the participants answered questions about their characteristics (age, sex, level of education, professional status, smoking status, history of COVID-19, and chronic medical conditions, with a particular interest in allergies and anaphylactic reactions). Their body weight and height were assessed, and their body mass index (BMI) was calculated. When the fieldworkers recruited a patient receiving their second dose in the two-dose vaccination scheme (Comirnaty, Moderna COVID-19 vaccine, or Vaxzevria), they interviewed them about all VAEs they had experienced after the first dose (given 4 weeks earlier). Then, 4–6 days after vaccination, all participants were contacted by telephone for a follow-up interview. The patients recruited before their first dose in the two-dose vaccination scheme were contacted again 3–5 weeks later (4–6 days after the second dose). The list of VAEs was established based on a literature review. The multiple-choice questions included local (injection site pain, erythema or swelling) and systemic symptoms (fever, chills, myalgia, arthralgia, axillary tenderness, headache, ocular problems, seizures, allergic reactions, presyncope, transient loss of consciousness, sleep/circadian rhythm disturbances, excessive fatigue/lethargy, feeling of confusion, heart palpitations or other cardiovascular symptoms, limb discolouration, diarrhoea, vomiting and nausea). In both categories, the participants could add any other symptoms they experienced. For this study, we classified the VAEs as ‘severe’ when they required emergency treatment (such as anaphylactic shock) or hospital admission, or when they caused death. All others were classified as ‘mild’. Data digitalisation and database processing Questionnaires were digitalised and transferred to a database by a health-data processing company. Then, we verified data transfer quality by comparing 5% of randomly selected questionnaires with the corresponding digital records. Only a few minor and negligible errors were found. Finally, before the statistical analysis started, we checked the database for data validity and conducted database cleaning. Statistical analysis To illustrate respondents’ characteristics and the frequency of VAEs, we calculated distributions for categorical data and means with standard deviations for quantitative data. Differences between groups were assessed using the Chi-square test or the t-test for dependent groups, as appropriate for the type of data. Multivariable logistic regression modelling was used to explore the possible influence of independent variables (sex, age, education, BMI, smoking status, chronic medical conditions, history of allergy, anaphylactic shock, COVID-19 infection, type of vaccine) on the occurrence of each VAE that occurred in at least 5% of cases. Similarly, multivariable linear regression modelling was used for the total number of local and systemic VAEs. An alpha level of p = 0.05 was adopted as the threshold of statistical significance. As we performed many comparisons (including many measures and VAEs), we corrected the level of statistical significance with the Holm–Bonferroni method. We used Statistica 13 software (Statsoft Inc.).
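To make the modelling strategy above concrete, here is a minimal sketch of one such model and of the Holm–Bonferroni step. The study itself used Statistica; this Python equivalent uses synthetic data, made-up variable names and placeholder p-values purely for illustration, and is not the authors' exact model specification.

```python
# Illustrative sketch of the analysis described above: a multivariable
# logistic regression for one adverse event, followed by Holm-Bonferroni
# correction across several tested outcomes. Variables and data are
# synthetic placeholders, not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "age": rng.integers(18, 90, n),
    "bmi": rng.normal(26, 4, n),
    "prior_covid": rng.integers(0, 2, n),
    "mrna_vaccine": rng.integers(0, 2, n),
})
# Synthetic outcome loosely dependent on some covariates.
lin_pred = -1 + 0.6 * df.female - 0.02 * (df.age - 50) + 0.5 * df.prior_covid
df["headache"] = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

model = smf.logit("headache ~ female + age + bmi + prior_covid + mrna_vaccine",
                  data=df).fit(disp=0)
print(model.summary2().tables[1][["Coef.", "P>|z|"]])

# Holm-Bonferroni correction across p-values collected from the models
# fitted for each adverse event (placeholder values here).
p_per_outcome = [0.001, 0.020, 0.048, 0.300]
rejected, p_adj, _, _ = multipletests(p_per_outcome, alpha=0.05, method="holm")
print(p_adj, rejected)
```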
Participants’ characteristics Overall, the fieldworkers invited 1530 adults who had received their COVID-19 vaccine, and 1071 (70%) agreed, signed informed consent and answered the questions of the first part of the questionnaire. We collected data on 760 patients vaccinated with Jcovden, 293 with Comirnaty, 17 with Vaxzevria and one with Spikevax. Owing to the small number of Vaxzevria and Spikevax participants, we did not include them in the analysis. We discarded two records from the Jcovden group because of substantial questionnaire gaps. Finally, we included data from 1051 (69%) patients: 758 receiving Jcovden and 293 receiving Comirnaty (of whom 231 [78.8%] were recruited when receiving the second dose). The participants vaccinated with Comirnaty and those with Jcovden differed in their characteristics and comorbidities but not in the history of allergy/anaphylaxis or COVID-19. presents the details. Vaccine adverse events Only every eleventh patient (8.8%) had neither local nor systemic reactions (10% in the Jcovden group vs 5.3% in the Comirnaty group), 227 participants reported no localised VAEs (10.2% in the Comirnaty group and 26.3% in the Jcovden group) and 250 participants experienced no systemic reactions (29.4% in the Comirnaty group and 23.2% in the Jcovden group). presents the details and presents the percentage of participants experiencing local and generalised VAEs. Only 50 participants reported VAEs that required immediate medical assistance: 39 (3.7%) experienced presyncope (all in persons without a history of anaphylactic shock) and only 11 (1.1%) reported transient loss of consciousness after receiving a vaccine (and although there was a statistically significant correlation with a history of anaphylactic shock [p = 0.04], this finding cannot be generalised). In two cases we had reliable information about hospital admission after vaccination (one because of nausea and vomiting, one because of tonsillitis; both in the VVV group). Factors associated with VAEs The logistic regression modelling identified factors correlated with reported VAEs. Female sex and history of COVID-19 were most frequently positively correlated with systemic VAEs, while older age correlated negatively with most of them. Patients vaccinated with Comirnaty were less likely to have systemic VAEs, with axillary tenderness as the exception. and present statistically relevant correlations between patients’ characteristics or type of vaccine and VAEs. With the linear regression modelling, we also identified predictors of the number of local ( Supplementary Table 1 ) and systemic ( Supplementary Table 2 ) VAEs. However, those models explained the variability poorly (corrected R2 coefficients of 0.07 and 0.12 for local and systemic VAEs, respectively).
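For readers who want to reproduce the flavour of these between-group comparisons, the sketch below runs a chi-square test on a 2x2 table reconstructed from the reported group sizes and the approximate proportions of participants with no reaction at all. The counts are rounded reconstructions for illustration, not the authors' raw data, so the resulting statistic should not be quoted as a study result.

```python
# Illustrative chi-square comparison of "no reaction at all" between the
# two vaccine groups. Counts are approximate reconstructions from the
# reported group sizes (758 Jcovden, 293 Comirnaty) and percentages
# (about 10% vs 5.3%), not the original raw data.
import numpy as np
from scipy.stats import chi2_contingency

jcovden_total, comirnaty_total = 758, 293
jcovden_none = round(0.10 * jcovden_total)       # ~76 with no reaction
comirnaty_none = round(0.053 * comirnaty_total)  # ~16 with no reaction

table = np.array([
    [jcovden_none, jcovden_total - jcovden_none],
    [comirnaty_none, comirnaty_total - comirnaty_none],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```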
Main findings This study has analysed data from 1051 patients vaccinated with mRNAV (Comirnaty) or VVV (Jcovden). The vaccination proved safe in short-term observation; 8.8% of the participants reported neither local nor systemic VAEs. There were only 11 cases of transient loss of consciousness and two other hospital admission cases. Those receiving Jcovden were more likely to develop systemic reactions and less likely to have local symptoms when compared with the Comirnaty group. The history of anaphylactic reaction correlated with the increased risk of allergic reactions, nausea and cardiovascular symptoms. Female sex, younger age and a history of COVID-19 were the other predictors for experiencing systemic VAEs. Older and obese patients were less likely to report local reactions. Current cigarette smoking was a ‘protective’ factor for myalgia, headaches and excessive fatigue. Strengths and limitations So far, this is one of the few studies to actively investigate VAEs in the primary care setting (data from an Australian large-scale system are already available , while those from Europe have not yet been published ). Our findings reflect the possible daily experience of primary care physicians vaccinating their patients against COVID-19. The other advantage of this project is that we conducted it during the national vaccination programme, when data on the safety of COVID-19 vaccination in the general population were still scarce.
Given that, the risk of bias in both participants and fieldworkers could be lower than in the case of retrospective interviews or analyses. Unlike the compulsory VAEs reporting system, we could identify patients with no or mild VAEs. In contrast to studies recruiting volunteers, there was a lower risk of overreporting severe or multiple VAEs. A specific limitation of the study is the relatively small number of patients and practices, as well as the limited geographical range. The low number of severe VAEs could result from the general advice for persons with a history of severe allergic reactions to be vaccinated at nodal hospitals only. The other limitation is the small number of patients receiving Vaxzevria or Spikevax. That was beyond our control, as the central government agency supplied the vaccination hubs, so the participating GP practices depended entirely on this system. Another disadvantage is that some of the data on VAEs after the first dose were collected retrospectively. However, the short period between the doses (4 weeks) makes the answers reliable. It is essential to underline that the presented results do not allow for reasoning about long-term VAEs. Comparison with existing literature Our findings are consistent with other studies on the Polish population and with international reports on the safety of COVID-19 vaccines. Findings from other studies of selected groups in the Polish population showed that COVID-19 vaccines’ VAEs are frequent but mild, similar to our results. Unlike Li et al. , we did not find a correlation between the history of allergy and any specific VAE, and only a weak correlation with the number of systemic VAEs. More frequent reporting of VAEs by females or persons with a higher level of education could be partially explained by differences in attitudes towards safety, as Syan et al. conclude in their study on the Canadian population . Similarly higher rates of VAEs were observed in females in Japan, but Urakawa et al. explain this by the role of sex hormones in the immune response . It is difficult to conclude whether the more frequent VAEs observed in younger patients depend on the type of vaccine or on the age of the patients (those receiving VVV were younger than those vaccinated with mRNAV). However, Chen et al. in their meta-analysis and Wu et al. in their review found that VAEs were more frequent in the case of vaccination with VVV when compared with mRNAV. They also state that VAEs were more frequently observed in younger patients , as also remarked by Urakawa et al. . To some extent, it is surprising that normal BMI, compared with obesity, correlated positively with the risk of generalised VAEs, yet this is consistent with the results of Iguacel et al. . Similar to our observations, Tissot et al. identified a history of COVID-19 infection as a predictor of post-vaccination reactions . Recommendations for clinical practice and future research Although the limitations do not allow one to generalise the findings, the study’s results might add to the evidence on the safety of COVID-19 vaccines. The results of this real-life study can help in discussions with hesitant patients and physicians. The study also shows the need to maintain primary care research networks to facilitate data collection in GP practices covering broad areas and large populations. Pragmatic studies of the long-term safety of the vaccination might contribute to building a larger body of knowledge about the studied issues.
We conclude that more than 90% of patients vaccinated against COVID-19 in primary care settings may experience VAEs in a short-term follow-up, and they are mostly mild. Their frequency is close to the manufacturers’ declarations but higher than reported by state institutions. Females, younger patients, those with higher education or a history of COVID-19 may experience systemic VAEs more frequently, while older and obese people are less likely to report local reactions.
Classification of kinesiophobia in patients after cardiac surgery under extracorporeal circulation in China: latent profile and influencing factors analysis from a cross-sectional study
c98b96bc-2e44-4d18-bf86-7285b30ccec7
11758704
Surgical Procedures, Operative[mh]
Kinesiophobia is defined as an irrational and excessive fear of carrying out a physical movement. Previous studies have reported that psychological factors, such as kinesiophobia, are a significant barrier to patient participation in cardiac rehabilitation (CR). In the context of cardiac disease, it is mostly described as a fear of physical activity due to apprehension of worsening the cardiac disease or of inducing adverse outcomes. Kinesiophobia was detected in 65% of individuals with chronic heart failure and in 86.26% of patients with a first-time acute myocardial infarction. According to a previous study, high levels of kinesiophobia can negatively impact not only the performance of daily activities but also CR engagement. Research has shown that kinesiophobia both influences and mediates attendance at CR. As a mediator, kinesiophobia is influenced by predictive factors and exerts indirect effects. General health and muscle endurance increased the probability of attendance at CR, while self-rated anxiety had the opposite effect. Studies have also explored whether there are positive changes in kinesiophobia with CR, in patients with higher levels of aerobic capacity and lower levels of physical activity compared with patients with low levels of kinesiophobia. Results showed a significant reduction in kinesiophobia after an exercise-based CR programme. CR is an important step in the recovery process after cardiac surgery. It is a comprehensive strategy aimed at improving a person's physical, psychological and social functioning. Studies have shown that exercise-based CR can not only reduce mortality and hospital admissions for cardiovascular disease but also improve quality of life and mental well-being. CR is given a level IA recommendation by most international cardiovascular societies. Kinesiophobia is a psychological disorder, and more attention should be paid to those subjective factors that are self-influenced and constantly changing. In a latent profile analysis of kinesiophobia in patients with coronary heart disease (CHD), objective demographic information was included in the analysis, and the results showed that patients could be divided into three latent types: 'low fear type', 'intermediate fear type' and 'high fear type'. However, among the influencing factors of kinesiophobia, objective factors cannot be modified by medical staff. In addition, research indicates that kinesiophobia is positively correlated with age, but the explanation for these age-related differences has not been studied. From the perspective of social psychology, the results of Zhang et al showed that it was important to alleviate kinesiophobia in patients with low subjective social status, but the mechanism by which social support produces positive effects across different kinesiophobia classes has not been clarified. Clinical professionals should collect objective influencing factors as predictors while focusing targeted interventions on patients' own subjective factors. Few studies have investigated the effects of kinesiophobia in patients after cardiac surgery under extracorporeal circulation. The number of cardiac surgeries has increased tremendously in recent years; in China, cardiac surgery volume increased by 8% in 2020 compared with 2012.
Extracorporeal circulation replaces cardiopulmonary function in a non-physiological way during cardiac surgery; postoperatively, patients' lung function is significantly decreased, the blood is in a hypercoagulable state, and there is a risk of thrombosis. Exercise is the main form of CR, and early postoperative activity helps patients reduce postoperative pulmonary complications and thrombotic events. However, for various reasons, kinesiophobia leads to a decline in patients' exercise compliance. The factors underlying kinesiophobia are complex and highly heterogeneous. From the social psychology perspective, the classification of kinesiophobia in patients after cardiac surgery under extracorporeal circulation has not been well characterised. Previous studies have mainly classified patients' kinesiophobia by the total scale score; however, patients with the same total score may differ greatly in their item-level scores. This study fills that gap. Latent profile analysis (LPA) is an 'individual-centred' statistical method that identifies homogeneous subgroups within continuous data, explores the characteristics of groups that are not defined in advance and the differences between them, and then allows the influencing factors of each subgroup to be analysed separately. We hypothesised that patients with kinesiophobia after cardiac surgery could be accurately divided into three subgroups using the LPA method and that the features of the groups would be well distinguished. The classification results of this study provide a reliable reference for clinical medical staff intervening in cases of kinesiophobia. Study design In this cross-sectional study, subgroups of kinesiophobia characteristics and associated factors in patients after heart surgery were investigated. All participants were recruited from a tertiary hospital in North China and completed the questionnaire between April 2022 and April 2023. Participants Participants who met the inclusion criteria were provided with information about the study prior to inclusion and gave their consent and expressed willingness to engage in this study after being fully informed of its objectives. The participants met the following inclusion and exclusion criteria. Inclusion criteria: (1) advised by a doctor to participate in CR; (2) underwent cardiac surgery under extracorporeal circulation (eg, coronary artery bypass grafting or cardiac valve replacement) 3 months prior to the survey; (3) adults aged 18–75 years; (4) conscious, mentally and psychologically competent, and able to complete the questionnaire. Exclusion criteria: (1) contraindication to CR (eg, uncontrollable or unstable angina, severe arrhythmias); (2) refusal to provide personal information for participation in the questionnaire; (3) recent severe family events (eg, malignancy), psychological instability, actively expressed depressive or anxious tendencies, or suspected mild cognitive impairment. Study tools Sociodemographic questionnaire Sociodemographic data were collected, including gender, age, education level, marital status, vocational type, average monthly household income, current residence, smoking status, alcohol consumption, surgical operation approach and postoperative time. Tampa Scale for Kinesiophobia Heart The Chinese version of the Tampa Scale for Kinesiophobia Heart (TSK-SV Heart) was used to assess the kinesiophobia levels of patients. This scale consists of 17 items that assess danger, fear, avoidance and dysfunction.
The questions were evaluated using a 4-point Likert scale (1=strongly disagree, 2=disagree, 3=agree and 4=strongly agree). A score of 37 or higher indicates a high level of kinesiophobia. Cardiac Exercise Self-Efficacy Instrument The Cardiac Exercise Self-Efficacy Instrument (CESEI) was developed by Hickey et al to measure exercise self-efficacy in CR patients. In 2021, a Chinese version of the CESEI was developed through translation, back translation and cultural adjustment. The Chinese version of the CESEI includes 16 items corresponding to one dimension, which are scored on a scale of 1–5. The total score is the sum of the item scores; the higher the score, the higher the patient’s self-efficacy in CR. The Cronbach’s alpha for the Chinese version of the CESEI is 0.941. Social Support Rating Scale The Social Support Rating Scale (SSRS) was used to examine the levels of social support among the participants. The SSRS comprises 10 items divided into 3 categories: objective support, subjective support and social support use. Low, medium and high levels of social support are represented by total scores of 0–22, 23–44 and 45–66, respectively. The Cronbach’s alpha of the scale is 0.81. Multi-dimensional Fatigue Inventory The Multi-dimensional Fatigue Inventory (MFI-20) was used to determine participants' fatigue levels. General fatigue, physical fatigue, reduced activity, reduced motivation and mental fatigue are the five dimensions of the MFI-20. Responses were given on a 5-point Likert scale ranging from 1 (yes, this is true) to 5 (no, this is not true). This scale has a Cronbach’s alpha of 0.882 and is regularly used to assess patient fatigue with good reliability. Hospital Anxiety and Depression Scale The Hospital Anxiety and Depression Scale (HADS) was used to determine the level of anxiety and depression in participants. The HADS comprises 14 items, each with four response options scored from 0 to 3, as well as 2 subscales: anxiety and depression. The HADS score indicates the severity of anxiety or depression; the higher the score, the more severe the symptoms. This scale has been tested in a variety of countries. Numerical Rating Scale The Numerical Rating Scale (NRS) is accurate, concise and easy to administer. It was once considered the gold standard for pain assessment by the American Pain Society. Patients are asked to select a single number representing the intensity of their pain on an 11-point scale from 0 to 10 (0=no pain and 10=worst pain). A score of 7–8 is classified as severe pain, indicating that the pain is intense.
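For readers who wish to apply these instruments in analysis scripts, the cut-off values described above can be encoded directly. The following is a minimal illustrative sketch in Python; the thresholds are those stated in this section (TSK-SV Heart total of 37 or higher, SSRS bands of 0–22, 23–44 and 45–66, and NRS scores of 7 or above treated as severe), while the function names and example inputs are assumptions made purely for illustration.

```python
# Minimal scoring sketch for the questionnaire cut-offs described above.
# Thresholds are taken from the scale descriptions in this section; the
# function names and example inputs are illustrative only.

def classify_tsk_heart(total_score: int) -> str:
    """TSK-SV Heart: 17 items scored 1-4; a total of 37 or higher indicates high kinesiophobia."""
    return "high kinesiophobia" if total_score >= 37 else "low kinesiophobia"

def classify_ssrs(total_score: int) -> str:
    """SSRS: totals of 0-22, 23-44 and 45-66 indicate low, medium and high social support."""
    if total_score <= 22:
        return "low support"
    if total_score <= 44:
        return "medium support"
    return "high support"

def classify_nrs(score: int) -> str:
    """NRS: 0 = no pain, 10 = worst pain; scores of 7 or above are treated as severe here."""
    if score == 0:
        return "no pain"
    return "severe pain" if score >= 7 else "mild to moderate pain"

if __name__ == "__main__":
    print(classify_tsk_heart(43))   # high kinesiophobia
    print(classify_ssrs(41))        # medium support
    print(classify_nrs(7))          # severe pain
```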
The statistical analysis was conducted using SPSS 22.0 software. For normally distributed quantitative data, descriptive statistics were presented as mean±SD, and group comparisons were performed using one-way analysis of variance. Qualitative data were described using frequencies and percentages, and group comparisons were performed using the χ 2 test. To establish the LPA model, Mplus 8.3 software was used. The TSK-SV Heart scores of patients after heart surgery were used as the model’s observed variables. The initial number of profiles was set to 1 and was increased stepwise. Model fit was assessed using various criteria, including the Akaike information criterion (AIC), Bayesian information criterion (BIC), sample-size-adjusted BIC (aBIC), entropy index, Lo–Mendell–Rubin likelihood ratio test (LMRT) and bootstrap likelihood ratio test (BLRT). Smaller values of AIC, BIC and aBIC indicate better model fit. An entropy value closer to 1 suggests a higher probability of accurate individual classification. The LMRT and BLRT were used to compare the fit of a k-profile model with that of a (k−1)-profile model. Based on the results of the LPA of kinesiophobia, a multiple logistic regression analysis was performed to explore the factors influencing the latent profile classification of patients’ kinesiophobia after heart surgery. The statistical tests were two-tailed, and p<0.05 was considered statistically significant. Convenience sampling was used to recruit 412 participants who had undergone cardiac surgery under extracorporeal circulation. 42 participants were excluded according to the inclusion and exclusion criteria, leaving 370 eligible participants in the study. 18 questionnaires had more than five blanks or missed important information, while another four questionnaires had the same answer choice for more than five consecutive questions. All of the above 22 questionnaires were excluded. Finally, 348 questionnaires were left for analysis. Demographic characteristics of the participants In the current study, 248 male participants (71.26%) and 100 female participants (28.74%) aged 18–45 years (18.97%), 46–65 years (40.80%) and 66–75 years (40.23%) were included. 252 participants (72.41%) underwent a conventional approach, while 101 participants (27.58%) underwent a sternum-sparing approach. 95 participants (27.29%) were 3–6 months postoperative, 132 participants (37.93%) were 7–12 months postoperative, 90 participants (25.86%) were 13–18 months postoperative and 31 participants (8.90%) were 19–24 months postoperative. Detailed characteristics of the participants are displayed in . LPA of the participants’ kinesiophobia scores LPA was conducted to identify the heterogeneity of kinesiophobia in patients after cardiac surgery under extracorporeal circulation. Four models were initially constructed and compared on the fit indicators AIC, BIC, aBIC, entropy, LMRT and BLRT. As the number of profiles increased, the AIC, BIC and aBIC values gradually decreased, reaching their minimum in Model 4; however, the LMRT p value for Model 4 was 0.131 (>0.05), whereas all fit indicators for Model 3 were satisfactory (p<0.05). Furthermore, the three profiles did not cross, as shown in . Therefore, Model 3 of kinesiophobia after heart surgery was accepted in the current study. 
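The class-enumeration step described above, in which models with an increasing number of profiles are fitted and compared on information criteria and classification entropy, can be illustrated outside Mplus. The sketch below is a rough approximation only: it fits Gaussian mixture models in Python to simulated Likert-type item scores, with the profile means and group sizes invented for illustration, and it omits the LMRT and BLRT, which are not provided by this library and were obtained by the authors in Mplus 8.3.

```python
# Approximate illustration of LPA class enumeration using a Gaussian mixture.
# Simulated data only; the authors' analysis was run in Mplus 8.3, and the
# LMRT/BLRT used there are not computed here.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated 17-item responses (1-4 Likert) for 348 patients drawn from three
# hypothetical profiles; means and group sizes are invented for illustration.
means, sizes = (1.8, 2.7, 3.2), (72, 148, 128)
X = np.vstack([np.clip(rng.normal(m, 0.5, size=(n, 17)), 1, 4)
               for m, n in zip(means, sizes)])

for k in range(1, 5):
    gmm = GaussianMixture(n_components=k, covariance_type="diag",
                          random_state=0).fit(X)
    post = gmm.predict_proba(X)
    if k == 1:
        entropy = 1.0  # relative entropy is defined as 1 for a single profile
    else:
        # Relative entropy: 1 minus normalised classification uncertainty.
        entropy = 1 - (-np.sum(post * np.log(post + 1e-12)) / (len(X) * np.log(k)))
    print(f"profiles={k}  AIC={gmm.aic(X):.1f}  BIC={gmm.bic(X):.1f}  entropy={entropy:.3f}")
```

In practice, as in this study, the final number of profiles is chosen by weighing these indices together with the likelihood ratio tests and the clinical interpretability of the resulting profiles.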
In Model 3, the different groups were named as the low kinesiophobia group (LKG), moderate kinesiophobia—high-risk perceived symptoms group (MK-HRPSG) and high kinesiophobia—high exercise avoidance group (HK-HEAG). The LKG (72/348, 20.6%) had low scores on all items, with a mean TSK-SV Heart score of 34.08±4.12. The MK-HRPSG (148/348, 42.6%) had moderate scores on all items, with a mean TSK-SV Heart score of 48.91±7.07. The HK-HEAG (128/348, 36.8%) had high scores on all items, with a mean TSK-SV Heart score of 51.81±6.07. One-way analysis of variance of different potential classification impact factors There were differences among the latent classifications of kinesiophobia in participants, with statistically significant differences in the distribution of age, postoperative time, pain, social support and self-efficacy (p<0.05), as shown in . Multiple logistic regression analysis of potential classification factors With the LKG as the reference category, a multinomial (unordered) logistic regression analysis was conducted; the kinesiophobia classifications were taken as the dependent variable, with the significant variables from the above analysis as independent variables and covariates. The results showed that age, postoperative time, self-efficacy, pain and social support were factors influencing the potential classification of kinesiophobia (p<0.05), as shown in . 
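As a purely computational illustration of the regression step reported above, the sketch below fits a multinomial (unordered) logistic model with the low kinesiophobia group coded 0 as the reference category. The column names and simulated data are assumptions made for illustration and do not reproduce the study dataset, so the resulting coefficients have no clinical meaning; only the structure of the analysis is shown.

```python
# Sketch of a multinomial logistic regression with the LKG (coded 0) as the
# reference category; simulated data and illustrative column names only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 348
df = pd.DataFrame({
    "age_group": rng.integers(1, 4, n),       # 1: 18-45, 2: 46-65, 3: 66-75
    "postop_period": rng.integers(1, 5, n),   # 1: 3-6, 2: 7-12, 3: 13-18, 4: 19-24 months
    "self_efficacy": rng.normal(55, 10, n),   # CESEI total
    "pain_nrs": rng.integers(0, 11, n),       # NRS 0-10
    "social_support": rng.normal(38, 8, n),   # SSRS total
    # 0 = LKG (reference), 1 = MK-HRPSG, 2 = HK-HEAG
    "profile": rng.choice([0, 1, 2], size=n, p=[0.21, 0.43, 0.36]),
})

X = sm.add_constant(df[["age_group", "postop_period", "self_efficacy",
                        "pain_nrs", "social_support"]])
result = sm.MNLogit(df["profile"], X).fit(disp=False)
print(result.summary())
print(np.exp(result.params))  # odds ratios relative to the LKG reference
```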
Characteristics of participants The data for this study were collected from a tertiary hospital in the country with an ample volume of cardiac surgeries. In this study, there was a higher proportion of male participants compared with female participants (male:female=2.48:1), which aligns with findings from previous studies. It is noteworthy that female participants constitute no more than 30% of the study population in research trials. There are several factors contributing to this gender disparity. First, mortality rates and risk for the most prevalent cardiovascular diseases consistently tend to be higher among men than women. Moreover, women face disparities in wealth, income and access to resources, which can hinder their timely access to medical care. A study revealed that women with low incomes, low levels of education and residing in deprived areas are more likely to delay seeking medical attention. Lastly, a lack of awareness among women about the importance of CHD and emergency care contributes to delays in seeking medical attention. A report from the European Heart Survey found that women aged 60 years and above were less likely to undergo coronary artery bypass grafting (CABG) compared with men, whereas they were more likely to receive percutaneous coronary intervention (PCI), adding to the gender differences in surgical treatment. Analysis of a 3-classification model of kinesiophobia score Among the models tested, Model 3 showed the most distinct and interpretable characteristics. As our results indicate, Model 4 had the best-fitting AIC, BIC and entropy values. Model 4 allows variances and covariances to be freely estimated and varied across profiles. However, this improvement in fit was not supported when the models were compared directly; in particular, the LMRT p value for Model 4 was non-significant (0.131). Ultimately, we selected Model 3 due to its relatively low AIC and BIC values, along with a high entropy value of 0.873, suggesting accurate classification of participants into the appropriate profile. Furthermore, the LMRT and BLRT values in Model 3 were significant (0.05 and 0.01, respectively), indicating good model fit. The researchers established a score of 37 as the threshold for determining the presence of kinesiophobia; a score of 37 or higher indicates a high level of kinesiophobia. Notably, participants in the LKG in Model 3 obtained a mean score of 34.08±4.12, indicating a lower degree of kinesiophobia. This finding aligns with previous studies. The MK-HRPSG in Model 3 scored lower on items 8 (2.00±0.73) and 16 (1.90±0.75) compared with an average item score of 2.33. These items belong to the risk perception dimension. The results suggest that participants in the MK-HRPSG exhibit a heightened perception of risk, which may result in decreased adherence to recommended treatments. In contrast, participants in the HK-HEAG in Model 3 obtained higher scores on item 2 (3.43±0.55) and lower scores on item 4 (2.86±0.84) compared with the average item score of 3.03. These particular items are indicative of exercise avoidance tendencies. 
Consequently, participants in the HK-HEAG in Model 3 may demonstrate greater resistance to and avoidance of physical activity when advised by their doctors regarding exercise prescriptions. Based on the presented results, Model 3 effectively minimises individual heterogeneity by considering latent traits, resulting in the identification of distinct subgroups. The results of this study showed potential classifications of kinesiophobia, which were mainly affected by age, postoperative time, self-efficacy, pain and social support. Factors affecting classification We observed that age under 45 years did not play a role in influencing the classification of kinesiophobia. In this study, age was analysed categorically as a rank variable, which differed from previous studies that treated age as a continuous variable. Remarkably, our findings emphasised the significance of age above 45 years, specifically indicating that patients aged over 45 years were more likely to exhibit tendencies towards the HK-HEAG. These results align with existing research showing higher levels of kinesiophobia among older adults. The increase in kinesiophobia with ageing can be attributed to factors such as physical frailty, which not only leads to a decrease in energy but also heightens the fear of injury and falling. In addition, it is difficult for older adults to acquire accurate information about kinesiophobia and exercise, which further aggravates exercise avoidance. However, Gunn et al reported an inverse association between age and kinesiophobia among adults, suggesting that older individuals may have more available time and exercise experience, which reduces their anxiety towards potentially harmful activities. It is important to note that generalising findings based on age is not appropriate, as age under 45 years did not prove to be a significant factor in our study. Future studies should focus on exploring the kinesiophobia classification in older adults. There is a time effect of kinesiophobia in the postoperative period. Our study found that the period between 3 and 6 months after surgery is a critical time frame for kinesiophobia concerns. The level of kinesiophobia decreased as postoperative time increased, and postoperative time was no longer a factor affecting the classification of kinesiophobia after 6 months. The longer the postoperative period, the less likely patients were to be classified in the HK-HEAG. This finding aligns with previous studies conducted on patients following an acute coronary artery disease event, which reported that kinesiophobia scores were highest (32.5) at baseline, decreasing to 30.9 after 2 weeks and 30.1 after 4 months, suggesting a decline in kinesiophobia over time. This trend may be due to the gradual recovery of exercise capacity and cardiac function over time, as patients can gradually tolerate increased exercise and feel the benefits of participating in it. Early postoperative activity has been shown to shorten functional recovery time, especially early mobilisation on the ward. Clinical staff should help patients overcome their kinesiophobia as early as possible in order to promote engagement in CR, improve cardiopulmonary function, reduce the incidence of postoperative venous thrombosis and minimise the length of hospital stay. Self-efficacy plays an important role in kinesiophobia classifications. 
In our study, we observed a negative association between self-efficacy and kinesiophobia, which aligns with previous research findings. Patients with high self-efficacy scores were more likely to be classified in the MK-HRPSG. According to Schwarzer et al's theory, individuals with low self-efficacy face difficulties accepting their health status and have lower confidence and expectations regarding exercise. They exhibit extreme reluctance to seek help when faced with unexpected traumatic events during exercise and are more prone to kinesiophobia. However, patients with high self-efficacy demonstrate favourable psychological adaptation and coping skills when faced with heart surgery, enabling them to approach challenges more proactively. Consequently, enhancing self-efficacy is an effective measure for preventing and alleviating kinesiophobia, and various interventions focused on increasing self-efficacy are currently available in postoperative settings. Further studies are needed to determine whether their use in patients undergoing cardiac surgery results in positive outcomes. Additionally, the inclusion of pain measurements in kinesiophobia assessments is essential. The fear-avoidance model suggests that if patients perceive pain as a frightening stimulus and experience an exacerbation of pain, they adopt negative coping mechanisms to avoid activities that trigger pain, thus exhibiting kinesiophobia. Therefore, it is necessary to provide patients with education on pain perception, help them understand the benefits of exercise, relieve their fear of pain and enhance their confidence in engaging in physical activity. Social support emerges as the primary factor influencing kinesiophobia in MK-HRPSG patients. Social support was negatively correlated with the classification level of kinesiophobia. This is consistent with the results of a qualitative study of 16 female patients by Keessen et al. In accordance with social support theory, individuals with ample social support are more inclined to confide their negative emotions to family, friends and social networks. This, in turn, boosts their confidence in facing discomfort and diminishes kinesiophobia. Notably, our observations revealed no significant correlation between social support and kinesiophobia in the HK-HEAG. As a result, we postulate that alternative interventions, beyond the domain of social support, should be explored to alleviate kinesiophobia in HK-HEAG patients. Limitations of this study This study has taken a step in the direction of defining and understanding kinesiophobia in patients in North China; patients from different cultural backgrounds may produce different results. In addition, it is important to emphasise that methodological limitations of the research design constrain our interpretations. The use of self-report questionnaires in data collection may also limit the objectivity of the kinesiophobia assessment. Finally, although the LPA method has advantages in group classification, the final model is selected by researchers according to a comprehensive judgement of the fit indicators; thus, the results may involve some subjectivity. 
This study uses LPA to identify potential classifications of kinesiophobia in patients after cardiac surgery under extracorporeal circulation. The findings indicate that patients fall into three distinct classifications: the LKG, MK-HRPSG and HK-HEAG. It is crucial for clinical staff to prioritise addressing kinesiophobia, particularly in older male patients during the early postoperative period. Furthermore, enhancing self-efficacy shows promise as an effective method for reducing kinesiophobia, while increasing social support may not yield desirable outcomes in the HK-HEAG. These findings offer a valuable evidence-based foundation for implementing preventative interventions to address kinesiophobia during CR for patients undergoing cardiac surgery. It is important to note that this study is cross-sectional, and future research should consider expanding the sample size and conducting longitudinal studies to validate the obtained results. 
CKD in reproductive-aged women: a call for early nephrology referral and multidisciplinary care
b2c1f67b-d402-4d7f-8903-7ff73e0fe33c
11616362
Internal Medicine[mh]
Chronic Kidney Disease (CKD) is a pressing global health issue, affecting an estimated 850 million people worldwide. While CKD affects individuals across all genders and age groups, it has particularly profound impacts on biological females of reproductive age (referred to as women or females in this review), who face unique challenges. For these individuals, CKD complicates several aspects of health and family planning, including fertility, contraception choices, adverse pregnancy outcomes, and CKD progression. Unique health conditions, such as acute kidney injury (AKI) during pregnancy or hypertensive disorders of pregnancy, can significantly affect long-term kidney health if not managed early. As rates of hypertension, obesity, and diabetes increase, the incidence of CKD among reproductive-age women is expected to rise, highlighting the urgent need for early intervention and specialized care to improve outcomes. This calls for a multidisciplinary approach to detect, monitor, and manage CKD in reproductive-age women. Primary Care Providers (PCPs) play a crucial role in the detection and early management of CKD. Current Kidney Disease Improving Global Outcomes (KDIGO) guidelines recommend nephrology referral primarily for CKD Stage 3b or above or in cases with substantial proteinuria. However, growing evidence suggests that biological females of reproductive age across all stages of CKD face increased risks that impact both kidney and reproductive health. Women with resolved kidney injury, glomerulonephritis in remission, diabetes, or chronic hypertension often face increased risks of both kidney and reproductive complications. Thus, current guidelines and risk prediction equations may not adequately reflect the needs of reproductive-age individuals with CKD. PCPs, although crucial in CKD management, may not always have the resources or time to discuss and adjust treatment plans comprehensively, particularly for reproductive-age women who may require specialized counseling on safe medication use, management of proteinuria, and blood pressure during pregnancy. The current approach to the timing of nephrology referral may inadvertently place these women at higher risk by potentially delaying specialist intervention until their kidney disease has significantly progressed. A multidisciplinary approach that prioritizes early nephrology referrals can help mitigate disease progression and support safer pregnancies. This review aims to highlight the need for a shift in the approach to the management of CKD among biological females of reproductive age. We will discuss the unique needs of this population, review the current state of care and knowledge gaps, and offer recommendations for improving care and outcomes. Epidemiology, risk factors, and care disparities among women of reproductive age Globally, CKD is more prevalent in women than men, and women younger than 45 have higher all-cause and non-cardiovascular mortality compared to men in the same age group. This increased mortality in reproductive-age females is not fully understood, though it may be linked to the impact of CKD on conditions related to reproduction. Current estimates suggest that CKD affects up to 6% of women of childbearing age in high-income countries and around 3% of pregnancies are impacted by CKD, although these figures likely underrepresent the true prevalence due to challenges in CKD diagnosis during pregnancy. 
Diagnosis of CKD in pregnancy remains challenging due to inconsistencies in CKD definitions, single laboratory measurements, non-validated estimated glomerular filtration rates (eGFRs), and insufficient proteinuria data. A large Canadian population study reported a higher CKD rate based on pre-pregnancy eGFR, with 7.5% of pregnancies involving mild CKD (eGFR 60–90 mL/min/1.73 m2). However, two-thirds of participants were excluded due to missing or invalid baseline creatinine measures, underscoring the need for improved screening and diagnosis of CKD. As the prevalence of CKD rises, so does the proportion of women of childbearing age with CKD, attributable to increasing rates of risk factors such as obesity, hypertension, and diabetes. Yet data from the 2023 USRDS annual report indicate that only 19.1% of patients aged 18–39 with CKD Stage 3 are receiving nephrology care, a significant portion of whom are women. Notably, this statistic overlooks earlier stages of CKD, where reproductive consequences may be less prominent but still significant. Care disparities in CKD disproportionately affect women, complicating timely diagnosis and management. Data from the United States National Health and Nutrition Examination Survey (NHANES) indicate that CKD awareness is approximately 10.2% lower in women aged 20–49 than in men, which is concerning given the significant implications of CKD on maternal and fetal health. Additional recent studies highlight that women are less likely to be referred to nephrologists and often receive less intensive CKD management than men. A recent Stockholm study demonstrated that women with CKD are less frequently diagnosed, monitored, referred to nephrologists, and prescribed antiproteinuric medications compared to men. Among individuals with diabetes and hypertension, women undergo fewer albuminuria measurements than men, and even when meeting referral criteria, they are less likely to visit a nephrologist within 18 months. Men are also more likely to be referred to a nephrologist at higher eGFR and receive a CKD diagnosis sooner than women. Women are also less likely to start renin–angiotensin–aldosterone system inhibitors (RAASi) and are more prone to receive potentially inappropriate nephrotoxic medications. These disparities may be influenced by diagnostic limitations, as the use of serum creatinine rather than eGFR for CKD diagnosis can prevent early detection in women, whose lower baseline serum creatinine levels may obscure early signs of CKD. Additionally, prescriber caution regarding the use of RAAS inhibitors and other anti-proteinuric agents such as sodium-glucose cotransporter 2 inhibitors (SGLT2i) in reproductive-age women due to concerns about teratogenicity may also be contributing. Sociocultural factors may add further complexity, as women's prioritization of family health over personal health can lead to neglected CKD management. These disparities are particularly pronounced in low socioeconomic areas and lower-middle-income countries, where access to comprehensive diagnostic tools, education, and regular monitoring is often limited. Pregnancy introduces further challenges in the diagnosis and management of CKD. Physiological changes during pregnancy, including fluctuations in glomerular filtration rate and proteinuria levels, can complicate accurate diagnosis and monitoring of CKD during this period. Current eGFR equations may also underestimate CKD severity in pregnant individuals, creating potential barriers to appropriate care. 
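To make the point about serum creatinine concrete, the same creatinine value maps to a lower eGFR in a woman than in a man of the same age. The sketch below applies the 2021 race-free CKD-EPI creatinine equation with the coefficients as commonly published; these values, and the hypothetical patient example, are provided for illustration only, should be verified against the original publication before any use, and, as noted above, eGFR equations are not validated during pregnancy.

```python
# Illustration of why a "normal-looking" creatinine can mask reduced eGFR in
# women. Coefficients are the 2021 race-free CKD-EPI creatinine equation as
# commonly published; verify against the original reference before use.

def ckd_epi_2021_egfr(scr_mg_dl: float, age: float, female: bool) -> float:
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age)
    return egfr * 1.012 if female else egfr

# Hypothetical example: the same creatinine of 1.1 mg/dL at age 35.
print(round(ckd_epi_2021_egfr(1.1, 35, female=True)))   # ~67 mL/min/1.73 m2
print(round(ckd_epi_2021_egfr(1.1, 35, female=False)))  # ~90 mL/min/1.73 m2
```

In this illustrative case, a creatinine of 1.1 mg/dL corresponds to an eGFR of roughly 67 mL/min/1.73 m2 in a 35-year-old woman but roughly 90 mL/min/1.73 m2 in a man of the same age, consistent with the concern that creatinine-based impressions can obscure early CKD in women.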
This challenge is compounded by pregnancy-specific complications, such as hypertensive disorders of pregnancy, which may not only exacerbate underlying CKD but can also lead to acute kidney injury (AKI) during pregnancy, with long-term ramifications for kidney health. Individuals with CKD face a greater risk for AKI due to pregnancy complications such as hemorrhage, hyperemesis gravidarum, sepsis, thrombotic microangiopathies, autoimmune disease flares, and obstructive uropathy. Moreover, gestational diabetes or severe metabolic dysfunction during pregnancy can further increase the risk of CKD progression among women with CKD. In high-income countries, childbirth at advanced maternal age (women over 35 years old) has become more common and is associated with a range of adverse pregnancy outcomes, including miscarriage, pre-eclampsia, and gestational diabetes. Though the risks may be small in magnitude, they can be compounded among women with CKD, who often have multiple other risk factors for adverse pregnancy outcomes. Recognizing and addressing these risk factors with targeted, pregnancy-specific interventions, especially through early nephrology care, is essential to improve kidney outcomes in this vulnerable population. Pregnancy and CKD: risks and complications Pregnancy poses significant challenges for women with kidney disease due to the complex bidirectional interactions between kidney disease and pregnancy. Women with CKD are at risk for adverse pregnancy outcomes, which include AKI, worsening of proteinuria, and progression of underlying CKD. They are also at increased risk of hypertensive disorders of pregnancy, particularly preeclampsia, which is associated with AKI, accelerated CKD progression, and end-stage kidney disease. Recent data from the United Kingdom reveal that 46% of pregnancies in women with CKD stages 3–5 had kidney disease progression (defined as at least a 25% reduction in eGFR or the need for renal replacement therapy) within one year postpartum. Additionally, pregnancies in women with CKD stages 3–5 have been shown to shorten the time to renal replacement therapy by 2.5–4.7 years. Given these risks, individuals with advanced CKD should be counseled about the potential irreversible loss of kidney function during pregnancy, which can be severe enough to necessitate the initiation of dialysis. In addition to maternal risks, there is also a significant increase in adverse fetal outcomes such as small-for-gestational-age birth, neonatal intensive care unit admissions, intrauterine growth restriction, and even fetal demise. Maternal and fetal risks vary considerably across CKD stages and are exacerbated by the presence of comorbidities. Studies show that maternal and fetal risks are present even in earlier CKD stages, though generally to a lesser extent than in advanced stages. For instance, worsening hypertension, increased proteinuria, and preeclampsia can develop in up to one-third of pregnant women with mild CKD. Prematurity (birth before 37 weeks), low birth weight, and fetal demise occur at slightly higher rates in women with mild CKD compared with those without kidney disease. Comorbidities further elevate these risks, particularly diabetes, chronic hypertension, and autoimmune disorders, which can significantly affect maternal and fetal outcomes if not well controlled before pregnancy. Pregnancy-related AKI has a significant impact on maternal and fetal outcomes. 
During the postpartum period, hemorrhage, infections, antibiotics, and nonsteroidal anti-inflammatory drugs can all increase the risk of AKI. To manage these risks effectively, patients require thorough education on the potential complications and need for close monitoring, along with physicians' guidance in selecting the safest timing for pregnancy. Given the elevated risk of adverse pregnancy outcomes (APOs) across all CKD stages, a multidisciplinary approach that includes early involvement of a nephrologist is essential to optimize outcomes for both mother and child. Pregnancy planning among biological females of reproductive age Pregnancy planning is a critical aspect of care for women with CKD of reproductive age, given the significant impact of kidney disease on fertility and overall reproductive health. CKD often leads to sexual dysfunction and decreased fertility, stemming from both hormonal imbalances and physiological changes. While the underlying causes are only partially understood, they include reduced libido, dyspareunia, and disruptions in the hypothalamic-gonadal axis. The specific effects of CKD on the axis include impaired ovulation (menstrual cycle disruptions, anovulation, and hypoestrogenism), dysfunctional uterine bleeding, hyperprolactinemia (increased production and reduced clearance in CKD), and early menopause. Studies show that approximately 80% of women with CKD report sexual dysfunction, and up to 40% experience menstrual abnormalities. The degree of impairment in the hypothalamic-gonadal axis is correlated with the severity of the CKD stage, emphasizing the importance of family planning at earlier stages of CKD. For women with advanced CKD, the timing of conception is a critical factor influencing fertility. Fertility rates are notably higher in those who conceive before dialysis initiation, likely due to the hormonal and physiological disruptions associated with dialysis treatment. Additionally, the reproductive lifespan of women with CKD has been found to be approximately 32 years, significantly shorter than the general population's average of 37 years. This shortened reproductive lifespan is a critical sex-specific factor that is associated with a higher future kidney and cardiovascular risk. Sexual dysfunction in women with CKD also impacts psychosocial health, contributing to anxiety, loss of self-confidence, and depression, and has long-term implications for cardiovascular disease and mineral bone disorder. Therefore, nephrologists must understand the pathophysiology, clinical manifestations, and treatment of sexual dysfunction, collaborating closely with obstetrician-gynecologists to enhance awareness and improve the quality of life for these patients. Contraceptive counseling is an important aspect of care for this population. Women with kidney disease have risk factors such as hypertension, diabetes, and thromboembolic disease that require careful consideration of contraceptive choice, given the inherent risk of blood clots and hypertension with some contraceptives. Despite the complexity surrounding pregnancy planning among women with CKD, proactive reproductive health discussions, including contraception counseling, are often overlooked. Less than a third of nephrologists report discussing menstrual irregularities and fertility with their patients, despite half acknowledging that their female patients desire these discussions. 
Women with CKD contemplating pregnancy report frequently feeling ill-equipped to make informed decisions about pregnancy, often due to limited guidance from their healthcare providers about the impacts of CKD on reproductive health. Patients report increased confidence to proceed with a pregnancy when supported by their nephrologist and when care is coordinated with their primary care providers and obstetricians. Even patients with mild CKD have expressed feelings of loss of autonomy or significant fears related to pregnancy, similar to those with advanced CKD, highlighting the need for proactive discussions at all CKD stages. For some women with impaired fertility and sexual dysfunction, assisted reproductive technologies (ART) may be necessary to achieve pregnancy. The risks associated with ART in CKD are not fully understood at this time due to limited data. In vitro fertilization (IVF) treatments in women with CKD carry the risk of ovarian hyperstimulation syndrome (OHSS), a potentially life-threatening complication that can lead to massive fluid shifts and AKI. Studies report that 7.4% of women with CKD undergoing IVF develop OHSS, a higher rate than in the general IVF population. Severe OHSS in CKD patients can cause AKI through hypovolemia, ureteric obstruction due to ovarian enlargement, or ischemic acute tubular necrosis. Additionally, IVF increases the likelihood of multifetal pregnancies, which independently elevates the risk of adverse pregnancy outcomes in women with CKD. Therefore, early and comprehensive family planning discussions are essential for managing pregnancy-related risks and improving long-term outcomes in women with CKD. Proactive reproductive counseling and coordinated care can empower women to make informed decisions as they navigate the complexities of CKD, fertility, and pregnancy. Current practices and guidelines for management of CKD in reproductive-age women The management of CKD in reproductive-age women presents unique challenges, especially in areas of reproductive health, contraception counseling, and medication management. Current practices often do not fully address the complex needs of this population, leading to missed opportunities for timely intervention and comprehensive family planning. The KDIGO 2024 guidelines recommend referring adults with CKD to specialist kidney care in cases of advanced CKD (eGFR < 60 mL/min/1.73 m2), rapidly declining kidney function, significant albuminuria (> 300 mg/g), refractory hypertension, or need for renal replacement therapy. Additional guidance suggests referral for patients with a 3–5% risk according to a validated risk tool, an absolute GFR < 30 mL/min/1.73 m2, or a urine albumin-creatinine ratio > 300 mg/g. While KDIGO personalized approaches consider age, sex, and gender, there are currently no specific recommendations for young women of reproductive age who may benefit from early nephrology consultation. Current risk prediction models, focused on identifying kidney failure risk over 2–5 years in patients with eGFR < 60 mL/min/1.73 m2, are less effective for early CKD stages and do not account for the impact of pregnancy on CKD progression. The American Heart Association has recognized hypertension in pregnancy as a risk factor for future cardiovascular disease and stroke. However, clinical guidelines have not yet addressed the role of reproductive risks in future kidney health. 
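To make the gap described above concrete, the numeric referral triggers cited in the preceding paragraph can be written as a simple screening rule. The sketch below encodes only those thresholds (an absolute GFR below 30, a urine albumin-creatinine ratio above 300 mg/g, and a predicted kidney failure risk in the 3–5% range, together with flags for rapidly declining function and refractory hypertension); the function and field names are illustrative, it is not a clinical decision tool, and, notably, nothing in it captures reproductive-age-specific risk.

```python
# Minimal sketch of the numeric referral triggers cited above; illustrative
# field names only, not a clinical decision tool. Note that none of the
# criteria reflects reproductive-age-specific risk.

def meets_referral_criteria(egfr_ml_min_173m2: float,
                            uacr_mg_g: float,
                            kidney_failure_risk_pct: float = 0.0,
                            rapidly_declining: bool = False,
                            refractory_hypertension: bool = False) -> bool:
    if egfr_ml_min_173m2 < 30 or uacr_mg_g > 300:
        return True
    if kidney_failure_risk_pct >= 3:  # lower bound of the 3-5% band cited above
        return True
    return rapidly_declining or refractory_hypertension

# A reproductive-age woman with early CKD and moderate albuminuria is not flagged:
print(meets_referral_criteria(egfr_ml_min_173m2=75, uacr_mg_g=120))  # False
```

The worked case underlines the review's argument: a reproductive-age woman with preserved eGFR and sub-threshold albuminuria falls outside every trigger, even though pregnancy planning may already warrant specialist input.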
A history of pregnancy outcomes and complications in women with kidney disease should be collected systematically by all nephrologists to increase our understanding of the interplay of kidney health and pregnancy and to inform future guidelines. Contraception counseling is crucial for preventing unintended pregnancies in women with CKD, as the risks of maternal and fetal complications increase when kidney disease and related comorbidities are poorly controlled. Despite recommendations, contraception use remains low in women with kidney disease, and few nephrologists discuss fertility and contraception with their patients . Many women report frustration with their lack of knowledge about reproductive health, delays in receiving information, and lack of discussions regarding contraception . Nephrologists often report low confidence in initiating and supporting these conversations, citing limited training and time constraints, which leads to missed opportunities for early and safe pregnancy planning . Patients with advanced CKD especially require intensive counseling, coordination of care, and individualized management. Despite reduced fertility in advanced CKD, conception remains possible at all stages of CKD. Many patients with advanced CKD are incorrectly advised that they are infertile, leading to an increased risk of unplanned and high-risk pregnancies . Due to improved fertility and outcomes with kidney transplantation compared to advanced CKD and dialysis, women are often advised to delay conception for 1–2 years post-transplantation and be informed about the benefits and risks of immunosuppressive agents . However, with transplant wait times often extending 5–10 years, women may face delayed conception into advanced maternal age, which is associated with risks for both mother and fetus. Effective pre-pregnancy counseling should also include screening for fetotoxic medications, maintaining well-controlled blood pressure, and establishment of timeline for close monitoring. Guideline-recommended anti-proteinuric agents such SGLT2i, RAASi, Mineralocorticoid Receptor Antagonists (MRAs), and Glucagon-like peptide-1 (GLP1) agonists are all contraindicated during pregnancy due to teratogenicity, yet guidelines lack specific timing recommendations for discontinuation and reinitiation post pregnancy . This lack of guidance in combination with inadequate counseling on contraceptive use in individuals of childbearing age who are prescribed these teratogenic medications poses a significant risk. Without proper contraceptive planning, there is an increased risk of unplanned pregnancies, potentially leading to adverse fetal outcomes. Standardizing pre-pregnancy counseling on medication safety and providing accessible resources on medication risks with early involvement of nephrologists could help mitigate these issues. Family planning for women with CKD is complex, as sociocultural pressures and personal desires to conceive may conflict with concerns about birth abnormalities, serious medical risks and perceived burden on family . A well-coordinated multidisciplinary team – including nephrologists, PCPs, and obstetricians—is essential to support informed decisions, reduce unplanned pregnancies, and provide comprehensive prenatal care. Nephrologists can focus on CKD progression and medication adjustments, obstetricians on pregnancy-specific risks, and primary care providers on broader contraceptive and health education, creating a supportive network for optimal patient outcomes. 
Recommendations for care and importance of early referral
Managing CKD in reproductive-age women requires a proactive, multidisciplinary approach to reduce the risks associated with pregnancy. Early nephrology referral, comprehensive family planning, and personalized reproductive counseling are essential to ensure optimal outcomes (Tables and ).
Early nephrology referral and monitoring
Timely referral to nephrology is critical, as pregnancy-related risks in CKD patients are considerably lower in those with well-preserved kidney function, minimal proteinuria, controlled blood pressure, and underlying disease remission. Early nephrology involvement enables close monitoring of kidney function, blood pressure, and proteinuria, which are key indicators of potential pregnancy complications. By addressing modifiable risk factors such as hypertension, diabetes, obesity, and proteinuria, providers can help mitigate disease progression and improve maternal and fetal outcomes. PCPs play a fundamental role in early identification of CKD and in initiation of discussions about reproductive health. PCPs are well positioned to detect early CKD in reproductive-age women, making them vital in setting the stage for early nephrology referral. PCPs should work collaboratively with nephrologists, who can provide specialized guidance and tailor care based on CKD stage and individual comorbidities. While specific recommendations for women of reproductive age are scarce, early referral to nephrology among all patients has been shown to improve long-term outcomes, particularly when interventions to prevent disease progression are initiated at higher eGFR. Early nephrology involvement is especially important for pre-pregnancy planning and ongoing monitoring, as some treatments are contraindicated during pregnancy. Nephrology engagement supports comprehensive care, facilitating medication review, kidney function assessment, and discussions on the impact of CKD on fertility and pregnancy. Close nephrology monitoring throughout pregnancy and postpartum ensures timely re-initiation of disease-modifying medications post-delivery, guided by individual risk assessments. This proactive approach helps patients make informed family planning decisions and mitigates the risk of adverse pregnancy outcomes. The increasing incidence of kidney disease, coupled with a shortage of nephrologists, prevents all patients with CKD from being seen by a specialist. The outcomes for biological females of reproductive age are generally favorable in the early stages of CKD compared to advanced CKD. We recognize that overtreatment can lead to unnecessary stress and interventions. However, given the complexity of family planning, the young age of the population, and the availability of an arsenal of kidney-protective agents, the earlier stages of CKD present a larger opportunity to mitigate pregnancy-related adverse outcomes and CKD progression.
Individualized family planning and reproductive counseling for women with CKD
Family planning should begin early in the management of women with CKD. Comprehensive history taking, including obstetric and reproductive history, is crucial for accurate risk assessment to inform potential pregnancy complications and CKD progression. This assessment lays the groundwork for initiating individualized discussions about contraception and pregnancy planning.
Discussions should cover the impact of CKD on fertility, noting that while fertility decreases as CKD progresses, pregnancy remains possible even at advanced CKD stages. Patients should be informed about the heightened maternal and fetal risks, including the significant risk of pregnancy-associated progression of kidney disease and the possibility of dialysis initiation, so that they can make well-informed decisions. Additionally, counseling should address the possibility of improved fertility and pregnancy outcomes following kidney transplantation. Contraceptive options should be thoroughly discussed, with education about the risks and benefits of various methods in the context of CKD. These discussions should be provided by both PCPs and nephrologists, ideally in coordination with obstetrician-gynecologists. For best outcomes, these discussions should start at early stages of CKD, before disease progression limits reproductive choices. Women with CKD require close monitoring and optimization of blood pressure and kidney function before planning a pregnancy, regardless of their disease stage. Assessing kidney function during pregnancy is challenging due to physiological changes such as glomerular hyperfiltration, increased renal blood flow, and altered volume of distribution, which can complicate CKD staging. In this scenario, frequent laboratory work and a 24-h urine collection may be necessary to assess kidney function, making regular visits with a nephrologist trained in these nuances essential for high-quality care. Medication management during pregnancy is crucial, as discontinuing fetotoxic medications, such as antiproteinuric agents, carries a risk of CKD progression and requires close nephrology oversight. Counseling should provide clear guidance on the timing of medication discontinuation and include monitoring plans for kidney function and proteinuria. Current guidelines recommend avoiding anti-proteinuric agents in pregnant women or those planning a pregnancy. For patients on ACEi, discontinuation is generally advised once conception is planned, or when pregnancy is confirmed in patients with high-risk proteinuric kidney disease. Due to limited data on first-trimester exposure to SGLT2i, MRAs, ARBs, and GLP-1 agonists, and some reports suggesting potential fetal risks, these medications should also be discontinued during conception planning and avoided during pregnancy. Blood pressure management in pregnancy requires careful balancing to optimize maternal and fetal outcomes. Aggressive treatment should be avoided to reduce the risk of decreased uteroplacental flow and potential growth restriction. Although the currently recommended blood pressure threshold for antihypertensive initiation is 140/90 mmHg, the optimal blood pressure target for pregnancy remains unknown. Given the increased risk of preeclampsia in CKD, low-dose aspirin is recommended for all women with CKD between 12 and 28 weeks of gestation. Differentiating preeclampsia from CKD progression can be challenging due to overlapping pathophysiology and symptoms. Rapid increases in proteinuria can occur due to withdrawal of antiproteinuric medications and pregnancy-related hyperfiltration, complicating diagnosis. Specific markers, such as elevated liver function tests, thrombocytopenia, and the FDA-approved ratio of soluble Fms-like tyrosine kinase 1 (sFlt-1) to placental growth factor (PlGF), can aid in distinguishing preeclampsia from CKD progression.
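The sFlt-1/PlGF ratio mentioned above is a simple quotient, but a short sketch may help show where it could sit in a triage workflow alongside liver enzymes, platelets, and the clinical picture. The cutoff used below (ratio of roughly 38 or less to help rule out preeclampsia in the short term) comes from published validation studies, not from this article, and the function names and threshold handling are illustrative assumptions rather than a clinical algorithm.

```python
def sflt1_plgf_ratio(sflt1_pg_ml: float, plgf_pg_ml: float) -> float:
    """Compute the sFlt-1/PlGF ratio from serum concentrations (pg/mL)."""
    if plgf_pg_ml <= 0:
        raise ValueError("PlGF must be a positive concentration")
    return sflt1_pg_ml / plgf_pg_ml

def triage_comment(ratio: float, rule_out_cutoff: float = 38.0) -> str:
    # A cutoff of 38 is a commonly cited literature value for short-term
    # rule-out of preeclampsia; it is not taken from this article and is shown
    # only to illustrate how the ratio might be interpreted in context.
    if ratio <= rule_out_cutoff:
        return ("Ratio low: preeclampsia less likely in the short term; "
                "continue CKD monitoring and reassess if symptoms change.")
    return ("Ratio elevated: preeclampsia not excluded; correlate with blood "
            "pressure, proteinuria trend, liver enzymes, and platelets.")

# Hypothetical example values, for illustration only.
print(triage_comment(sflt1_plgf_ratio(sflt1_pg_ml=4200.0, plgf_pg_ml=180.0)))
```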
Proposed strategies for optimizing care
Establishing multidisciplinary teams for comprehensive care
Multidisciplinary teams (MDTs) composed of nephrologists, obstetricians, PCPs, and other specialists are essential to provide comprehensive care for women of reproductive age with CKD and improve outcomes. MDTs ensure comprehensive care is delivered by integrating reproductive health and CKD management, developing personalized care strategies, and leveraging diverse expertise to manage complex cases. They facilitate early detection and intervention, offer education and counseling on reproductive health, and address the specific needs of women with CKD, aligning with their desire for individualized, multidisciplinary care. Establishing dedicated obstetric nephrology clinics provides an environment where nephrologists can closely collaborate with obstetricians to manage the care of women with CKD. In these clinics, teams can provide continuous care, coordinate medication management, and address complications arising during preconception, pregnancy, and postpartum. This setup supports quality care, enhances expertise in complex CKD cases, and strengthens communication between specialists.
Enhanced training and education for primary care and nephrologists
Specialized training for nephrologists on reproductive health in CKD patients will improve the quality of reproductive counseling and family planning. Nephrology fellowship programs and continuing medical education should include modules on topics such as fertility counseling, pregnancy-related CKD risks, contraceptive options, and safe medication practices. Training in these areas will equip nephrologists to initiate proactive discussions and to address the specific reproductive health needs of women with CKD confidently. Efforts should be made to educate PCPs on the impact of CKD on fertility, pregnancy outcomes, and the importance of early referral. PCPs should understand the basic pathophysiology and progression of CKD, as well as the associated reproductive risks across CKD stages. They should be prepared to initiate early and ongoing contraceptive counseling and recognize when timely referrals to obstetrics and nephrology are needed, especially for patients planning to conceive or at risk of unplanned pregnancies.
Developing clear guidelines for reproductive health in CKD
Standardized, evidence-based guidelines on reproductive counseling, family planning, and pregnancy management for CKD patients would help improve care consistency and provider confidence. Clear protocols are needed for contraceptive counseling, timing for discontinuing teratogenic medications, and risk management during pregnancy. Additionally, guidelines should address early nephrology referral criteria for young women with CKD, especially those planning pregnancy or requiring specialized reproductive health support. These tailored protocols would provide a structured framework for PCPs, nephrologists, and OB-GYNs to deliver coordinated and consistent care across settings. Examples of early referral guidelines exist internationally; for instance, the National Institute for Health and Care Excellence (NICE) guidelines for CKD recommend referring young people to a specialist with any decrease in eGFR or a persistent ACR of 3 mg/mmol or more. These guidelines could serve as a valuable model for developing similar standards for women of childbearing age with CKD.
Globally, CKD is more prevalent in women than men, and women younger than 45 have higher all-cause and non-cardiovascular mortality compared to men in the same age group. This increased mortality in reproductive-age females is not fully understood, though it may be linked to the impact of CKD on conditions related to reproduction. Current estimates suggest that CKD affects up to 6% of women of childbearing age in high-income countries and that around 3% of pregnancies are impacted by CKD, although these figures likely underrepresent the true prevalence due to challenges in CKD diagnosis during pregnancy. Diagnosis of CKD in pregnancy remains challenging due to inconsistencies in CKD definitions, single laboratory measurements, non-validated estimated glomerular filtration rates (eGFRs), and insufficient proteinuria data. A large Canadian population study reported a higher CKD rate based on pre-pregnancy eGFR, with 7.5% of pregnancies occurring in women with mild CKD (eGFR 60–90 mL/min/1.73 m²). However, two-thirds of participants were excluded due to missing or invalid baseline creatinine measures, underscoring the need for improved screening and diagnosis of CKD. As the prevalence of CKD rises, so does the proportion of women of childbearing age with CKD, attributable to increasing rates of risk factors such as obesity, hypertension, and diabetes. Yet data from the 2023 USRDS annual report indicate that only 19.1% of patients aged 18–39 with CKD stage 3 are receiving nephrology care, a significant portion of whom are women. Notably, this statistic overlooks earlier stages of CKD, where reproductive consequences may be less prominent but still significant. Care disparities in CKD disproportionately affect women, complicating timely diagnosis and management. Data from the United States National Health and Nutrition Examination Survey (NHANES) indicate that CKD awareness is approximately 10.2% lower in women aged 20–49 than in men, which is concerning given the significant implications of CKD on maternal and fetal health. Additional recent studies highlight that women are less likely to be referred to nephrologists and often receive less intensive CKD management than men. A recent Stockholm study demonstrated that women with CKD are less frequently diagnosed, monitored, referred to nephrologists, and prescribed antiproteinuric medications compared to men. Among individuals with diabetes and hypertension, women undergo fewer albuminuria measurements than men, and even when meeting referral criteria, they are less likely to visit a nephrologist within 18 months. Men are also more likely to be referred to a nephrologist at higher eGFR and receive a CKD diagnosis sooner than women. Women are also less likely to start renin–angiotensin–aldosterone system inhibitors (RAASi) and are more prone to receive potentially inappropriate nephrotoxic medications. These disparities may be influenced by diagnostic limitations, as the use of serum creatinine rather than eGFR for CKD diagnosis can prevent early detection in women, whose lower baseline serum creatinine levels may obscure early signs of CKD. Additionally, prescriber caution regarding the use of RAAS inhibitors and other anti-proteinuric agents such as sodium-glucose cotransporter 2 inhibitors (SGLT2i) in reproductive-age women, due to concerns about teratogenicity, may also contribute. Sociocultural factors may add further complexity, as women's prioritization of family health over personal health can lead to neglected CKD management.
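The point above about serum creatinine obscuring early CKD in women can be made concrete with the race-free CKD-EPI 2021 creatinine equation. The sketch below is illustrative only (the variable names and example values are assumptions made here): it shows that the same serum creatinine of 1.0 mg/dL maps to a substantially lower eGFR in a woman than in a man of the same age, which is why relying on raw creatinine values rather than eGFR can delay recognition of reduced kidney function in women.

```python
def ckd_epi_2021_egfr(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """Race-free CKD-EPI 2021 creatinine equation, result in mL/min/1.73 m^2."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age_years)
    if female:
        egfr *= 1.012
    return egfr

# Same creatinine, same age: the woman's eGFR is roughly 25 points lower,
# so a "normal-looking" creatinine can already reflect reduced function.
for sex, is_female in (("female", True), ("male", False)):
    print(sex, round(ckd_epi_2021_egfr(scr_mg_dl=1.0, age_years=35, female=is_female), 1))
```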
These disparities are particularly pronounced in low socioeconomic areas and lower-middle-income countries, where access to comprehensive diagnostic tools, education, and regular monitoring is often limited. Pregnancy introduces further challenges in the diagnosis and management of CKD. Physiological changes during pregnancy, including fluctuations in glomerular filtration rate and proteinuria levels, can complicate accurate diagnosis and monitoring of CKD during this period. Current eGFR equations may also underestimate CKD severity in pregnant individuals, creating potential barriers to appropriate care. This challenge is compounded by pregnancy-specific complications, such as hypertensive disorders of pregnancy, which may not only exacerbate underlying CKD but can also lead to acute kidney injury (AKI) during pregnancy, with long-term ramifications for kidney health. Individuals with CKD face a greater risk for AKI due to pregnancy complications such as hemorrhage, hyperemesis gravidarum, sepsis, thrombotic microangiopathies, autoimmune disease flares, and obstructive uropathy. Moreover, gestational diabetes or severe metabolic dysfunction during pregnancy can further increase the risk of CKD progression among women with CKD. In high-income countries, giving birth at advanced maternal age (over 35 years old) has become more common and is associated with a range of adverse pregnancy outcomes, including miscarriage, pre-eclampsia, and gestational diabetes. Though the risks may be small in magnitude, they can be compounded among women with CKD, who often have multiple other risk factors for adverse pregnancy outcomes. Recognizing and addressing these risk factors with targeted, pregnancy-specific interventions, especially through early nephrology care, is essential to improve kidney outcomes in this vulnerable population. Pregnancy poses significant challenges for women with kidney disease due to the complex bidirectional interactions between kidney disease and pregnancy. Women with CKD are at risk for adverse pregnancy outcomes, which include AKI, worsening of proteinuria, and progression of underlying CKD. They are also at increased risk of hypertensive disorders of pregnancy, particularly preeclampsia, which is associated with AKI, accelerated CKD progression, and end-stage kidney disease. Recent data from the United Kingdom reveal that 46% of pregnancies in women with CKD stages 3–5 had kidney disease progression (defined as at least a 25% reduction in eGFR or the need for renal replacement therapy) within one year postpartum. Additionally, pregnancies in women with CKD stages 3–5 have been shown to shorten the time to renal replacement therapy by 2.5–4.7 years. Given these risks, individuals with advanced CKD should be counseled about the potential irreversible loss of kidney function during pregnancy, which can be severe enough to necessitate the initiation of dialysis. In addition to maternal risks, there is also a significant increase in adverse fetal outcomes such as small-for-gestational-age infants, neonatal intensive care unit admissions, intrauterine growth restriction, and even fetal demise. Maternal and fetal risks vary considerably across CKD stages and are exacerbated by the presence of comorbidities. Studies show that maternal and fetal risks are present even in earlier CKD stages, though generally to a lesser extent than in advanced stages.
For instance, worsening hypertension, increased proteinuria, and preeclampsia can develop in up to one-third of pregnant women with mild CKD. Prematurity (birth before 37 weeks), low birth weight, and fetal demise occur at slightly higher rates in women with mild CKD compared with those without kidney disease. Comorbidities further elevate these risks, particularly diabetes, chronic hypertension, and autoimmune disorders, which can significantly affect maternal and fetal outcomes if not well controlled before pregnancy. Pregnancy-related AKI has a significant impact on maternal and fetal outcomes. During the postpartum period, hemorrhage, infections, antibiotics, and nonsteroidal anti-inflammatory drugs can all increase the risk of AKI. To manage these risks effectively, patients require thorough education on the potential complications and the need for close monitoring, as well as physicians' guidance in selecting the safest timing for pregnancy. Given the elevated risk of adverse pregnancy outcomes (APOs) across all CKD stages, a multidisciplinary approach that includes early involvement of a nephrologist is essential to optimize outcomes for both mother and child. Pregnancy planning is a critical aspect of care for women with CKD of reproductive age, given the significant impact of kidney disease on fertility and overall reproductive health. CKD often leads to sexual dysfunction and decreased fertility, stemming from both hormonal imbalances and physiological changes. While the underlying causes are only partially understood, they include reduced libido, dyspareunia, and disruptions in the hypothalamic–gonadal axis. The specific effects of CKD on the axis include impaired ovulation (menstrual cycle disruptions, anovulation, and hypoestrogenism), dysfunctional uterine bleeding, hyperprolactinemia (increased production and reduced clearance in CKD), and earlier menopause. Studies show that approximately 80% of women with CKD report sexual dysfunction, and up to 40% experience menstrual abnormalities. The degree of impairment in the hypothalamic–gonadal axis is correlated with the severity of the CKD stage, emphasizing the importance of family planning at earlier stages of CKD. For women with advanced CKD, the timing of conception is a critical factor influencing fertility. Fertility rates are notably higher in those who conceive before dialysis initiation, likely due to the hormonal and physiological disruptions associated with dialysis treatment. Additionally, the reproductive lifespan of women with CKD has been found to be approximately 32 years, significantly shorter than the general population's average of 37 years. This shortened reproductive lifespan is a critical sex-specific factor that is associated with higher future kidney and cardiovascular risk. Sexual dysfunction in women with CKD also impacts psychosocial health, contributing to anxiety, loss of self-confidence, and depression, and has long-term impacts on cardiovascular disease and mineral bone disorder. Therefore, nephrologists must understand the pathophysiology, clinical manifestations, and treatment of sexual dysfunction, collaborating closely with obstetrician-gynecologists to enhance awareness and improve the quality of life for these patients. Contraceptive counseling is an important aspect of care for this population.
Women with kidney disease have risk factors such as hypertension, diabetes, and thromboembolic disease that require careful consideration of the choice of contraceptive, due to the inherent risk of blood clots and hypertension with some contraceptives. Despite the complexity surrounding pregnancy planning among women with CKD, proactive reproductive health discussions, including contraception counseling, are often overlooked. Less than a third of nephrologists report discussing menstrual irregularities and fertility with their patients, despite half acknowledging that their female patients desire these discussions.
The management of CKD in women of reproductive age requires a multifaceted approach. Early referral to nephrology, multidisciplinary collaboration, and tailored clinical guidance are essential to optimize the care and outcomes of this population.
By recognizing the importance of preconception counseling, addressing modifiable risk factors, and enhancing awareness amongst healthcare providers, we can improve maternal and fetal health outcomes and reduce the progression of CKD. Further efforts to implement specific recommendations in this population are needed to ensure equitable and effective care for women affected by CKD.
The expression characteristic and prognostic role of Siglec‐15 in lung adenocarcinoma
29e0ca3e-15ed-411d-8323-17b917f1b577
11082535
Anatomy[mh]
INTRODUCTION
Lung cancer (LC) is one of the most common cancers and remains the leading cause of cancer-related death worldwide. In China, the situation is particularly concerning, as LC mortality has increased by more than 400% over the past three decades. Based on pathological features, the two major types of LC are small cell lung carcinoma (SCLC) and non-small cell lung carcinoma (NSCLC). Lung adenocarcinoma (LUAD) is the most common subtype of NSCLC and is characterized by high invasiveness, frequent metastasis, and poor prognosis. To date, molecular targeted therapies have demonstrated significant effectiveness in prolonging overall survival (OS) in LUAD patients with positive driver gene mutations. Immunotherapy with immune checkpoint inhibitors (ICIs), including anti(a)-PD-1, aPD-L1, and aCTLA-4 antibodies, has also produced encouraging outcomes in LUAD patients. However, only a subset of LUAD patients respond to targeted therapy and immunotherapy, and drug resistance eventually becomes inevitable. Hence, the screening and identification of new biomarkers, especially novel checkpoints with clinical potential, are of great importance and urgently needed. Sialic acid-binding immunoglobulin-like lectins (Siglecs) are a family of immunoglobulin-like receptors that recognize sialylated glycans and play important roles in regulating immune homeostasis. Recently, an increasing number of Siglec members have been found to play a crucial role in tumor immunosuppression. Siglec-15, also known as CD33L3, is a distinctive Siglec family member that contains one IgV and one IgC2 domain and shows marked similarity to B7 family molecules. Siglec-15 expression is primarily observed in human dendritic cells and macrophages. In the tumor microenvironment (TME), tumor cells with high Siglec-15 expression often demonstrate highly malignant features and behaviors. Recently, Siglec-15 has been reported to act as a novel immune checkpoint molecule and a suitable candidate for cancer immunotherapy. A number of studies describe Siglec-15 expression as largely mutually exclusive with PD-L1 expression in many solid tumors, including NSCLC. A recent study reported that Siglec-15 functions independently of the PD-1/PD-L1 pathway in the TME, implying that blocking Siglec-15 may offer an alternative immunotherapy for patients who fail to respond to initial PD-1/PD-L1 therapy. However, few studies have addressed whether Siglec-15 also acts oncogenically in LUAD or whether it could serve as a valuable biomarker correlated with important clinical parameters of LUAD. In this study, a number of bioinformatic databases were first consulted. We then collected LUAD tissue samples to examine Siglec-15 expression at both the mRNA and protein levels. The relationship between Siglec-15 expression and clinicopathologic attributes was further explored. The prognostic role of Siglec-15 in LUAD was finally evaluated.
MATERIALS AND METHODS
2.1 Bioinformatic analysis and data retrieval
The Human Protein Atlas (HPA) database was examined to explore the overall and detailed expression characteristics of Siglec-15 ( http://www.proteinatlas.org/ ). The Gene Expression Profiling Interactive Analysis (GEPIA) database was searched to investigate the expression status of Siglec-15 in various solid tumors ( http://gepia.cancer-pku.cn/ ).
The TCGA database was further consulted to confirm the mRNA expression of Siglec-15 ( https://cancergenome.nih.gov ). The Kmplot database was employed to assess the prognostic role of Siglec-15 ( http://kmplot.com/analysis/ ).
2.2 Tissue samples
Sixteen fresh LUAD tissue samples and corresponding noncancerous tissue samples were collected from the Department of Thoracic Surgery, The First People's Hospital of Lianyungang, from Jan 2020 to Dec 2022. A total of 93 formalin-fixed, paraffin-embedded LUAD samples and 89 corresponding noncancerous samples were obtained from Outdo Biotech Co., Ltd (Shanghai, China). Important clinicopathological data of the LUAD cases were provided with the raw data accompanying the TMA product. Clinical staging was defined based on the American Joint Committee on Cancer/International Union Against Cancer TNM staging system. Written informed consent was collected from the LC patients enrolled in the present research. Ethical and research protocols were approved by the Human Research Ethics Committee of The Fourth Affiliated Hospital of Nanjing Medical University.
2.3 One-step qPCR test and western blotting analysis
For the qPCR test, total RNA was extracted from the 16 frozen LUAD tissue samples using the Trizol reagent following the manufacturer's protocols. RNA extraction and qPCR analysis were performed as previously described. For western blotting analysis, total protein was extracted from three LUAD tissue samples, separated, and transferred onto a nitrocellulose membrane. The membranes were first incubated with the polyclonal Siglec-15 antibody (NBP2-41162, Novus Biologicals, USA) and then detected with an ECL kit. The detailed protocol was described previously.
2.4 Immunohistochemistry (IHC) analysis
IHC analysis was performed as previously described. Tissue sections were incubated with a polyclonal rabbit anti-Siglec-15 antibody (Abcam, ab198684, 1:150) in TBS. The Siglec-15 immunostaining score was assessed by two independent pathologists on the basis of the intensity and percentage of positively stained cells. The detailed protocol was described in our previous studies. Briefly, the degree of Siglec-15 staining was defined as follows: samples with a final score <4 were recognized as low expression, while those with a final score ≥4 were determined as high expression. Samples with a final score = 0 were classified as negative expression.
2.5 Statistical analysis
All values are shown as the mean ± standard error. The relationships between Siglec-15 expression and important clinical parameters were analyzed by chi-square tests. Survival was analyzed by the Kaplan–Meier method. Univariate and multivariate analyses were performed using Cox proportional hazards regression models to identify and validate prognostic factors. P < 0.05 was considered to indicate a statistically significant difference. All statistical analyses were performed using STATA 18.0 (Stata Corporation, College Station, TX, USA).
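The one-step qPCR comparison outlined in Section 2.3, with normalization to a housekeeping gene (GAPDH, per the Results) and a paired tumor-versus-normal design, is typically analyzed with the 2^-ΔΔCt (Livak) method. The sketch below is an illustrative reconstruction of that kind of calculation, not the authors' analysis script: the Ct values are invented, and the paired t-test simply mirrors the style of comparison reported later (the study itself used Stata).

```python
import numpy as np
from scipy import stats

def relative_expression(ct_target, ct_reference, calibrator_dct):
    """Livak 2^-ddCt: normalize target Ct to a reference gene (e.g., GAPDH),
    then express it relative to a calibrator delta-Ct."""
    dct = np.asarray(ct_target) - np.asarray(ct_reference)
    ddct = dct - calibrator_dct
    return 2.0 ** (-ddct)

# Hypothetical paired Ct values for six patients (tumor vs. matched normal).
ct_siglec15_tumor  = np.array([24.1, 23.8, 25.0, 24.4, 23.9, 24.7])
ct_gapdh_tumor     = np.array([18.0, 17.9, 18.2, 18.1, 17.8, 18.3])
ct_siglec15_normal = np.array([25.2, 24.9, 25.8, 25.5, 24.8, 25.6])
ct_gapdh_normal    = np.array([18.1, 18.0, 18.1, 18.2, 17.9, 18.2])

# Use the mean delta-Ct of the normal tissues as the calibrator.
calibrator = np.mean(ct_siglec15_normal - ct_gapdh_normal)
tumor_expr  = relative_expression(ct_siglec15_tumor, ct_gapdh_tumor, calibrator)
normal_expr = relative_expression(ct_siglec15_normal, ct_gapdh_normal, calibrator)

fold_change = tumor_expr.mean() / normal_expr.mean()
t_stat, p_value = stats.ttest_rel(tumor_expr, normal_expr)
print(f"fold change = {fold_change:.2f}, paired t-test p = {p_value:.4f}")
```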
3.2 Bioinformatic outline of Siglec‐15 expression in cancer cells The HPA database describes Siglec‐15 expression in various cancer cell lines. In particular, Siglec‐15 expression was significantly up‐regulated in brain cancer and thyroid cancer cell lines (Figure ). Detailed information on Siglec‐15 expression in LC cell lines is shown in Figure . At the single‐cell sequencing level, Siglec‐15 expression was predominantly observed in macrophages (Figure ). Figure shows that the typical subcellular location of the Siglec‐15 protein is the nucleoplasm. 3.3 Bioinformatic information on the prognostic roles of Siglec‐15 The Kmplot database was interrogated to examine the prognostic characteristics of Siglec‐15. Figure shows that high Siglec‐15 expression suggests poor prognosis in the pan‐cancer setting ( P = 0.0299). Moreover, for LC, elevated Siglec‐15 expression also indicated poor prognosis for both progression‐free survival (PFS, P = 5.7 × 10 −9 ) and overall survival (OS, P = 0.00069) (Figure ). In addition, Figure shows that high Siglec‐15 expression implied favorable OS in patients treated with anti‐PD‐1 ( P = 0.019) or anti‐PD‐L1 ( P = 5.5 × 10 −5 ) therapy. 3.4 Siglec‐15 expression was up‐regulated in LUAD Sixteen LUAD tissue samples were collected for the qPCR test. When normalized to GAPDH, the mean Siglec‐15 mRNA levels in LUAD and corresponding noncancerous tissues were 2.928 ± 1.41 and 2.019 ± 0.88, respectively ( P = 0.0369). Siglec‐15 expression was on average 1.45‐fold higher in LUAD tissues than in noncancerous tissues (Figure ). Three LUAD cases were then subjected to western blotting analysis. The results demonstrated that Siglec‐15 protein expression in LUAD tissues was significantly elevated compared with that in noncancerous tissues, confirming the qPCR data (Figure ). 3.5 Detection of Siglec‐15 protein expression by IHC analysis IHC analysis was performed to examine the protein expression of Siglec‐15 in LUAD. In this cohort, high Siglec‐15 expression was detected in 30 (32.3%) of 93 LUAD tissues compared with 14 (15.7%) of the matched noncancerous tissues. This difference was statistically significant ( P < 0.05) and was consistent with the qPCR and western blotting findings that high Siglec‐15 expression is more frequent in LUAD tissues. Positive staining was observed mainly in the nuclei of LUAD cells. Representative images of Siglec‐15 staining are shown in Figures and : Figure shows Siglec‐15 expression in LUAD tissue samples, while Figure shows Siglec‐15 expression in noncancerous tissue samples. High Siglec‐15 expression was significantly associated with TNM stage ( P = 0.019) (Table ). 3.6 Survival analysis Univariate analysis was conducted to screen for prognostic factors affecting LUAD outcome in this cohort. The results showed that several factors, including Siglec‐15 expression ( P = 0.047), N status ( P = 0.002), and TNM stage ( P = 0.001), demonstrated a significant association with the overall survival of LUAD patients. Multivariate analysis confirmed that TNM stage ( P = 0.036) could be considered an independent prognostic factor in this LUAD cohort (Table ). Kaplan–Meier survival curves showed that LUAD patients with high Siglec‐15 expression, positive N status, and advanced TNM stage had markedly worse overall survival (Figure ).
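The Kaplan–Meier and Cox analyses reported above were carried out in Stata 18.0. For illustration only, a minimal Python sketch of an equivalent workflow is shown below; the input table, column names, and covariate coding are hypothetical, and the exact model specification used in the study may differ.

```python
# Illustrative re-creation of the survival workflow (the study itself used Stata 18.0).
# Assumes a hypothetical per-patient table "luad_cohort.csv" with columns:
# "os_months", "death" (1 = event), "siglec15_high", "n_positive", "stage_advanced".
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("luad_cohort.csv")

# Kaplan-Meier curves stratified by dichotomised Siglec-15 expression
kmf = KaplanMeierFitter()
for label, group in df.groupby("siglec15_high"):
    kmf.fit(group["os_months"], group["death"], label=f"Siglec-15 high = {label}")
    print(label, "median OS (months):", kmf.median_survival_time_)

# Multivariate Cox proportional hazards model
cph = CoxPHFitter()
cph.fit(
    df[["os_months", "death", "siglec15_high", "n_positive", "stage_advanced"]],
    duration_col="os_months",
    event_col="death",
)
cph.print_summary()
```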
DISCUSSION Previous studies revealed several important characteristics of Siglec‐15. First, Siglec‐15 is up‐regulated in tumor cells and macrophages rather than normal tissues, implying restricted activity within the tumor microenvironment (TME). Second, Siglec‐15 strongly suppresses T cell responses, and Siglec‐15 inhibition reverses this suppression, suggesting that a Siglec‐15‐specific antibody may restore tumor immunity and inhibit tumor growth. In addition, the Siglec‐15 pathway is expressed independently of the PD‐L1/PD‐1 pathway, indicating that targeting Siglec‐15 might be an alternative therapeutic option for patients who respond poorly to anti‐PD‐L1/PD‐1 therapy. In this study, bioinformatics analyses were first performed to investigate the expression features of Siglec‐15 in human cancers. For tissue samples, the HPA, GEPIA, and TCGA databases all demonstrated high Siglec‐15 expression in LC. For cell samples, the HPA database provided detailed information on Siglec‐15 expression, including single‐cell data showing that Siglec‐15 is mainly expressed in macrophages. Moreover, the Kmplot database revealed several prognostic roles of Siglec‐15 in both pan‐cancer and LC scenarios. In particular, high Siglec‐15 expression also indicated a favorable treatment outcome with immune checkpoint inhibitors (ICIs; anti‐PD‐1 or anti‐PD‐L1). Several previous studies have also highlighted the significant attributes and potential of Siglec‐15 in LC. Huang et al. reported that Siglec‐15‐positive macrophages (PD‐L1‐independent) facilitated the development of an immunosuppressive TME in LUAD without metastasis, which might contribute to tumor relapse. Li et al. reported that patients with low CD8A expression/CD8 + T cell infiltration and high Siglec‐15 expression showed activation of immunosuppressive and metabolism‐related pathways, along with increased infiltration of tumor‐associated macrophages (TAMs). Regarding the underlying mechanism, Zhang et al. revealed that obesity could accelerate immune evasion of non‐small cell lung carcinoma via TFEB‐dependent up‐regulation of Siglec‐15 and glycolytic reprogramming. In the present study, we collected tissue samples and performed qPCR, western blotting, and IHC analyses to further investigate Siglec‐15 expression in LUAD. The qPCR test with 16 LUAD samples showed substantially elevated Siglec‐15 expression in cancer tissues compared with noncancerous tissues. Western blotting analysis with three LUAD samples validated that the protein level of Siglec‐15 was also up‐regulated in cancer tissues. A LUAD cohort of 93 cases on a TMA was then examined, and the IHC results verified these expression characteristics of Siglec‐15. Analogously, Shafi et al. reported increased Siglec‐15 expression in both tumor and immune cells in four types of cancer (lung cancer, breast cancer, head and neck squamous cell carcinoma, and bladder cancer) ; Quirino et al.
stated that high Siglec‐15 expression could be observed in neoplastic tissues in gastric cancer. Furthermore, high Siglec‐15 protein expression was statistically associated with TNM stage, consistent with previous studies showing carcinogenic roles of Siglec‐15 in human cancers, for instance participating in the development and progression of retroperitoneal liposarcoma, promoting immune evasion in acute lymphoblastic leukemia, and facilitating cancer cell migration in hepatoma. In the survival analysis, univariate analysis identified several parameters that significantly correlated with the overall survival of the 93 LUAD patients, such as Siglec‐15 expression, N status, and TNM stage. Although Siglec‐15 was not identified as an independent prognostic factor for LUAD prognosis, Kaplan–Meier curves also showed that LUAD patients with elevated Siglec‐15 expression had markedly worse outcomes than patients with low expression. These survival results are consistent with a meta‐analysis reported by Jiang et al., which summarized the unfavorable OS implications of Siglec‐15 in solid tumors. Interestingly, several studies have reported diverse prognostic characteristics of Siglec‐15. Hao et al. revealed that Siglec‐15 expression was not associated with the prognosis of early NSCLC. Jiang et al. concluded that Siglec‐15 expression was simultaneously associated with a dramatically worse OS but a favorable disease‐specific survival (DSS). In our previous study, a novel Siglec‐15 antibody was prepared and showed encouraging tumor‐inhibitory effectiveness in LUAD by modulating macrophage polarization, suggesting a detrimental role of Siglec‐15 in cancer management. Nevertheless, Zhou et al. illustrated that Siglec‐15 was associated with a better pathological response and more favorable survival in ESCC patients receiving neoadjuvant chemoradiotherapy, implying a beneficial role of Siglec‐15 in the treatment of human cancer. These inconsistent or even conflicting data may be explained by the multifunctional qualities of Siglec‐15: as an immunosuppressive molecule, Siglec‐15 expressed at different sites or in different tumor types could give rise to diverse activities in the TME. Moreover, the present research has several limitations. First, we did not include LUAD cell lines to detect Siglec‐15 expression, nor did we perform a series of knockdown or rescue experiments. Second, the mechanism of Siglec‐15 function was not fully investigated, and little is known about how Siglec‐15 modulates cell–cell communication or signaling pathways in the LUAD TME. Further, thorough studies that include more cancer types, explore cellular crosstalk, and elucidate the potential mechanisms of Siglec‐15 will be of great significance in confirming and deepening our current results. CONCLUSION In all, up‐regulation of Siglec‐15 expression was observed in LUAD, and elevated Siglec‐15 expression correlated with TNM stage. High expression of Siglec‐15 implied unfavorable overall survival in LUAD patients. Siglec‐15 might serve as a novel prognostic biomarker in LUAD, and targeting Siglec‐15 may provide a promising strategy for LUAD immunotherapy. Lin Wang and Yuan Mao designed the study. Haijun Sun, Qilong Du, and Yuyu Xu collected the tissue samples. Haijun Sun and Yuyu Xu performed the PCR and WB experiments. Li Xu and Junrong Yang performed the IHC analysis.
Haijun Sun, Qilong Du, and Cheng Rao performed the statistics. Haijun Sun and Qilong Du drafted the manuscript. Lin Wang and Yuan Mao supervised the study. All authors had full access to the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. The authors declare no potential conflicts of interest with respect to the research, authorship, and publication of this article. Written informed consent was obtained from the patients for the publication of this study and the use of any accompanying images. The study protocol was approved by the Ethics Committee of The Fourth Affiliated Hospital of Nanjing Medical University (20230303‐k098). Figure S1. Siglec‐15 protein concentrations in the pan‐cancer cohort ( https://www.proteinatlas.org/ENSG00000197046-SIGLEC15/disease ). Figure S2. Siglec‐15 expression suggests poor prognosis in the pan‐cancer setting (P = 0.0299, https://kmplot.com/analysis ).
Developing intervention fidelity strategies for a behaviour change intervention delivered in primary care dental practices: the RETURN fidelity strategy
dec20632-7507-42f2-ba14-d8ce6942773e
11831780
Dentistry[mh]
Monitoring the implementation of behaviour change interventions (BCIs) according to their intended protocols is essential for the accurate interpretation of healthcare trial results . Failure to prevent unintended deviations from BCI protocols increases the risk of methodological errors, leading to uncertainties in the interpretation of results . Specifically, poor monitoring of BCI implementation can result in Type I errors, where trial results falsely indicate an intervention’s effectiveness due to unauthorised additions or omissions of key components. Type II errors occur when a genuine effect is not detected for similar reasons, and Type III errors arise when incorrect conclusions about BCI effectiveness are drawn due to discrepancies between the intended and delivered interventions . To mitigate unintended outcomes in trials testing BCIs, it is crucial to implement strategies that enhance both internal and external validity . This ensures that conclusions drawn are unequivocally attributable to the BCI rather than extraneous factors . The complexity and multi-component nature of BCIs present specific challenges in achieving scientific rigour, as isolating the effect of each component and ensuring consistent implementation across various contexts is difficult . Intervention fidelity, also known as treatment fidelity, is a critical methodological tool in addressing these challenges within randomised controlled trials (RCTs) . One conceptualisation of intervention fidelity tailored to BCIs is the National Institutes of Health Behavior Change Consortium (BCC) treatment fidelity framework . The BCC defines intervention fidelity as “the methodological strategies and practices used to enhance and monitor the reliability and validity of behavioural interventions” . The framework provides a structured set of strategies for researchers to enhance fidelity practices during the development and testing of BCIs, particularly when these interventions are delivered in real-world environments by healthcare professionals . This approach ensures consistent and effective implementation across different settings. The BCC fidelity framework encompasses five domains: design, training, delivery, receipt, and enactment, with each domain offering specific strategies to enhance fidelity within that area . Table provides an overview of how the strategies recommended by the BCC enhance intervention fidelity within each domain. The recent iteration of the Medical Research Council’s (MRC) guidance on the development and evaluation of complex interventions underscores the necessity for a flexible, iterative, and context-dependent approach to ensure that research findings are applicable and beneficial to real-world settings . The framework specifically emphasises the value of transitioning between research phases, allowing for the integration of new insights and the refinement of interventions to optimise outcomes . Feasibility studies play a crucial role in this iterative process, enabling researchers to identify and address potential implementation challenges, assess recruitment and retention rates, and test the practicality of procedures on a smaller scale before advancing to full-scale evaluations . Accordingly, it is essential that the refinement of intervention fidelity strategies is embedded within the outcomes of feasibility studies, with a particular focus on context-dependent factors and the incorporation of relevant stakeholder perspectives . 
The primary dental care setting poses distinct challenges for implementing BCI trials, and dental trials often lack methodological rigour, resulting in ambiguous findings . Several factors contribute to these challenges. The decentralised nature of primary care dental practices, which often function as independent business entities, complicates the standardisation and coordination necessary across multiple study sites . Additionally, the significant variability in patient populations within dental practices hampers the development of uniform research protocols . The busy and high-demand environment of dental practices also restricts the time available for dental teams to participate in research activities , thereby affecting the feasibility of conducting methodologically rigorous studies. Whilst many of these challenges could apply to other primary care settings (e.g. General Practice), the primary dental care setting within the United Kingdom (UK) is also an untapped research setting, meaning dental personnel are relatively inexperienced in research delivery, which further adds to the challenges of delivering robust research. To address these issues, enhancing intervention fidelity strategies in trials testing BCIs within primary dental care settings may be a viable solution. However, a recent scoping review revealed that little emphasis has been placed on the development and implementation of robust fidelity strategies in this field to date . Implementing these strategies could potentially improve the reliability and consistency of outcomes in BCI trials conducted in primary dental care settings. Accordingly, this paper aims to describe the development of a comprehensive intervention fidelity strategy for implementation in an RCT (inteRventions to rEduce inequaliTies in the Uptake of Routine deNtal care RCT – the RETURN main trial) which assesses a BCI delivered within primary dental care settings. Drawing upon principles outlined in the BCC framework, the strategy's development has been informed by insights gathered from the RETURN feasibility study. Ethics Ethical approval was obtained from Bromley Research Ethics Committee (19/LO/1510). Research governance approvals were obtained from the Health Research Authority (reference 265789), and sponsorship was provided by the University of Liverpool (reference UoL001354). All data used were accessed only by authorised study members and were stored in a secure location in accordance with ethical requirements. Procedures The RETURN intervention The RETURN intervention is a single-session, brief psychological intervention delivered by dental nurses in urgent dental care settings. Its primary objective is to support patients who only use dental services when they have an urgent problem. By assisting patients in identifying and overcoming barriers to routine dental visits, the intervention aims to promote regular, planned dental care, thereby improving oral health outcomes. The intervention is multifaceted and comprises several components delivered opportunistically to patients attending an urgent dental appointment. It leverages a "teachable moment" approach . A comprehensive description of the intervention has been detailed elsewhere . Briefly, the intervention comprises two elements: A "patient pack" with behaviour change techniques embedded within the materials.
The pack comprises: Six booklets addressing common barriers to routine dental visiting (cost, time constraints, not thinking to go when not in pain, distrust of dentists, embarrassment, and anxiety). Corresponding barrier videos featuring dental patients sharing their experiences of overcoming barriers, augmented with engaging animations. These videos were created specifically to resonate with the trial population. A written goal and action plan completed during the intervention session, targeting one barrier. Additional materials and booklets intended for post-appointment use at home, encouraging routine dental visiting. These include practical aids such as breathing exercises for anxious patients, contact information for dental services, and an "employer card" endorsing routine dental attendance. Access to a study website via a personal login where all intervention materials can be viewed. 2) A structured conversation facilitated by trained dental nurses . The conversation guides patients through the intervention process, utilising empathetic listening and non-judgmental, non-directive dialogue. Its dual purpose is to enhance participant engagement by tailoring discussions to individual experiences whilst ensuring interaction with key intervention components. The RETURN feasibility study The RETURN feasibility study was a parallel-group, two-arm RCT that aimed to recruit 60 patients. Its primary objective was to assess the feasibility of conducting a larger RCT (the RETURN main trial) within urgent dental care settings to evaluate the RETURN intervention. Patients were allocated to either receive no intervention (usual care at the recruiting urgent dental care site) or the RETURN intervention. The study was conducted in Merseyside, North-West England in the UK in three site types: (1) an urgent clinic in a Teaching Dental Hospital, (2) an out-of-hours urgent dental care service, and (3) an urgent clinic in an in-hours dental practice. Each site put forward dental nurses to be trained for one hour in Good Clinical Practice principles and for two hours in study procedures and intervention delivery. Training sessions were didactic, with opportunities to role play intervention deliveries, and were conducted either at the University of Liverpool or at site. During the recruitment period (January 2020 to March 2020), routine dental care appointments for new National Health Service (NHS) patients were readily available in the region. Recruitment ceased abruptly due to the COVID-19 pandemic, resulting in the enrolment of 28 patients, approximately halfway to the target. Follow-up was achieved for 82% of the patients via telephone, email, or post, four months post-recruitment. Feasibility measures included primary outcome data completion rates, recruitment rates, and fidelity. A comprehensive study description and results have been published elsewhere . Briefly, despite premature termination, the results were considered sufficient to warrant proceeding to a full-scale RCT, with the addition of an internal pilot to monitor progress. Developing the fidelity strategies In alignment with MRC guidance on intervention refinement and Borrelli's recommendation to pilot test interventions and incorporate feedback from participants and providers, the RETURN feasibility study provided an opportunity to develop fidelity strategies for the RETURN main trial.
A total of 58 hours of observations were conducted, covering the recruitment of 24 patients and 11 intervention delivery sessions (the remaining 13 patients were allocated to the control arm, and therefore no intervention delivery was observed). Observation time also encompassed site set-up, informal discussions with dental nurses, and additional ad hoc training conversations throughout the study period. Field notes were recorded at the end of each day. Additionally, telephone semi-structured interviews were conducted with two dental nurses involved in delivering the feasibility study and 17 study patients. All dental nurses who were both trained and had delivered the intervention were invited to be interviewed. The number of dental nurses participating in interviews was limited because the onset of the COVID-19 pandemic refocused the dental workforce on purely clinical activity, and so fewer nurses ( n = 2) delivered the intervention than were trained ( n = 9). This also resulted in a researcher, rather than dental nurses, recruiting patients at one site, but they were not approached to take part in this study. For pragmatic reasons, the 17 patients who responded to the RETURN feasibility study follow-up by telephone were invited to be interviewed ( n = 9 intervention & n = 8 control), all of whom agreed to take part. Interviews were audio-recorded and transcribed. Employing the Framework Method guided by the BCC recommendations, and using a deductive approach to structure the coding framework in accordance with the components of the intervention, field notes and interview transcripts were analysed to pinpoint areas where fidelity could be strengthened for the RETURN main trial. Data underwent coding, charting, mapping, and verification across the entire dataset to inform the development of a robust fidelity strategy. In addition to the logic model, which sets out the underlying mechanisms of the intervention materials as described in the intervention development publication , and recognising the two separate elements of the intervention (the "patient pack" and the conversational element), we produced an 'operational model' to facilitate a complete assessment of intervention fidelity within the RETURN main trial . This can be found in Table . This operational model provides a scaffold for the intervention fidelity strategy and helped inform its design by outlining which intervention activities should be present in an intervention conversation for it to be considered delivered as intended. From this model, many of the strategies contained in this manuscript were developed (for example, it guided the components featured in the RETURN fidelity checklist developed to monitor training and assess delivery fidelity, discussed within the 'delivery' section below).
This article now sets out the strategy following the 5 domains ( in bold ) and goals ( in bold italics ) of the BCC framework , adapted by Borrelli . Design Explicitly identify and use a theoretical model as a basis for the intervention and ensure the intervention components and measures are reflective of underlying theory The theoretical underpinnings guiding the intervention have been detailed in a publication outlining the intervention’s development process, which includes a comprehensive logic model . Briefly, the intervention draws from multiple theoretical frameworks, incorporating elements of Protection Motivation Theory and Identity-based Motivation Theory . During the feasibility study, it became evident that the conversational aspect of the intervention required a structured approach to enhance standardisation across intervention sessions. Accordingly, Motivational Interviewing (MI) ‘spirit’ was introduced as a framework to provide structure to these conversations in the RETURN main trial, while bolstering the theoretical coherence of intervention deliveries. Ensure consistent intervention dose and develop a monitoring plan to maintain consistency Variations in intervention ‘dose’ were noted during the feasibility study, with session durations ranging from 10 to 37 min, Mean (Standard Deviation (SD)) = 21 minutes. Observations revealed this was primarily influenced by patient engagement and confidence levels of the interventionist. Whilst an intervention duration target of around 15 min was set for pragmatic reasons as part of the intervention design goals for the feasibility study, for the RETURN main trial, a larger emphasis on dose standardisation will be implemented through training. Additionally, to underscore the importance of regulating dose, specific guidance on the duration for each intervention component for the main trial will be given: Barrier discussion – 4 min. Motivation enhancement: video and discussion – 3 min. Knowledge enhancement (guided discussion using booklet materials) – 3 min. Setting Specific, Measurable, Achievable, Relevant, and Time-bound (SMART) goals and action plan – 4 min. Setting intentions at the session’s conclusion – 1 min. However, as a patient-centred approach is inherent in MI techniques where discussions are led by patients, variations in intervention dose will be deemed an acceptable intervention adaptation in the RETURN main trial. This decision is supported by the understanding that patients facing multiple barriers may have longer ‘barrier discussions’ leading to variations in intervention duration. Monitoring of dose will be achieved through audio-recordings, although no corrective measures will be taken to standardise dose in the main trial. Patients also take intervention materials home, and accordingly, questions about additional engagement with the materials will form part of the RETURN main trial follow-ups. Likewise, metadata from the study website will be reviewed to assess whether patients viewed intervention materials at home. This comprehensive approach to dose monitoring aims to enrich the interpretation of the RETURN main trial findings, and dose variations will be considered in the analysis of study outcomes (i.e. is there an optimum amount of ‘dose’ to elucidate behaviour change? ). Develop a plan for how adherence to the protocol will be monitored. Monitor both intervention delivery and assessment administration Adherence to the intervention protocol was identified as a concern during the feasibility study. 
Based on observations, only 5/11 (45%) intervention patients received the intended discussion. The following feasibility observation illustrates poor adherence to the prescribed approach: DN02: "This is the pack; they have spoken to lots of people to make the pack. There are 6 barriers that people told to them. These are common and lots of people said them". Observation: DN02 was showing the 'What Next Booklet' to the participant but kept it in front of them so the participant was unable to read it. DN02 flashed the booklet and pointed to the barriers. Moving the booklet away again, they read the barriers out one by one. DA0201: "So we have cost, time, I don't think I have any problems, trust, embarrassment, anxiety". The nurse turned the booklet over, and said "and there is also a plan, that is from psychological theory, and there are other materials". All the while DA02 kept hold of the booklet. Observation: DN02 then went back to the barrier page, showing the participant, and asked: "Which of these do you relate to?" I felt this was quite a closed question. There was no conversation about what was stopping them from going. The participant was simply asked to choose which one from the list. Observation 06: Site 02, DN02. To address this in the RETURN main trial, adherence monitoring will be strengthened by considering the challenges of the research context. Indeed, findings from a recent scoping review of fidelity reporting in primary care dental settings suggest that the onus/burden of intervention protocol adherence and competency monitoring should sit with research teams. Therefore, in the RETURN main trial, dental nurses will be asked to audio-record 100% of their intervention sessions, rather than using alternative monitoring techniques such as asking them to complete checklists after each intervention delivery. Adherence and competency during the RETURN main trial will be monitored by selecting at least one intervention recording per dental nurse each month, which the research team will score using pre-determined criteria contained within an intervention-specific fidelity checklist (the RETURN checklist, see Table ). The RETURN checklist comprises six essential intervention components: overarching communication skills (MI derived), barrier discussion, motivation enhancement through a video, knowledge enhancement through a barrier booklet, goal and action plan setting, and intention setting. Each component comprises a combination of theoretical components designed to increase behaviour change capacity (i.e. encouraging the use of SMART principles for goal setting) and practical requirements (i.e. showing the video relevant to the selected barrier). The scoring system takes the form of a Likert scale: 0 = not implemented, 1 = partially implemented, 2 = substantially implemented, 3 = fully implemented, giving an indication of both adherence and competency. There are no guidelines to inform the optimum 'level' of fidelity that should be present in a BCI delivered within dental practices. However, Durlak and DuPre found outcomes were effective in educational interventions if they were delivered with 60–80% fidelity , and a 90% threshold is frequently used in clinical interventions involving psychological therapies . Therefore, a cautious approach will be adopted in the RETURN main trial, and a threshold of 80% within each intervention component will be set for a delivery session to be considered to have achieved high fidelity.
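As a minimal sketch of how the 80%-per-component rule could be operationalised, the example below groups hypothetical 0–3 item scores into the six checklist components and flags a session as high fidelity only when every component reaches 80% of its maximum possible score. The item counts and component keys are assumptions made for illustration; in the trial, the items come from the RETURN checklist (Table ).

```python
# Illustrative sketch of the 80%-per-component high-fidelity rule described above.
# Item groupings and counts are hypothetical; each item is scored 0-3 as in the text.
from typing import Dict, List

MAX_ITEM_SCORE = 3
HIGH_FIDELITY_THRESHOLD = 0.80  # 80% of the maximum possible score per component

def component_fidelity(item_scores: List[int]) -> float:
    """Proportion of the maximum possible score achieved within one component."""
    return sum(item_scores) / (MAX_ITEM_SCORE * len(item_scores))

def session_is_high_fidelity(session: Dict[str, List[int]]) -> bool:
    """A session counts as high fidelity only if every component meets the threshold."""
    return all(component_fidelity(scores) >= HIGH_FIDELITY_THRESHOLD
               for scores in session.values())

# Hypothetical scored session: component name -> item scores from the checklist
example_session = {
    "communication_skills": [3, 2, 3, 3],
    "barrier_discussion": [3, 3],
    "motivation_video": [2, 3],
    "knowledge_booklet": [3, 3],
    "goal_action_plan": [3, 2, 3],
    "intention_setting": [3],
}
print(session_is_high_fidelity(example_session))  # True: every component scores >= 80%
```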
To provide guidance and to ensure consistency in intervention scoring, a scoring guidance manual was created (see Additional File ). This was developed collaboratively by RETURN researchers using an iterative approach to ensure that the descriptions contained within the manual were understood consistently across the team. The manual was both created and tested using a method whereby audio-recordings of the feasibility intervention sessions were scored independently, results compared, and discrepancies discussed until consensus was achieved (> 80% agreement rate). The development of the fidelity checklist and the scoring guidance manual followed steps three to five as suggested by Walton and colleagues , with an iterative approach utilising feedback from the RETURN researchers to refine the items and scoring guidance. An example of the scoring guidance for the domain of 'overarching communication skills', for the demonstration of 'priorities, beliefs and challenges acknowledged', is illustrated below: Patient's priorities, beliefs and challenges acknowledged: patients should not be challenged on their beliefs, priorities or challenges experienced previously, even if they are in direct conflict with the principles of the delivery nurse. These should simply be acknowledged as an experience that occurred. Score 0 if the patient's priorities/beliefs are challenged by the nurse, e.g. Patient: "I couldn't get a dentist because there weren't any" Nurse: "There was loads of NHS availability a year ago so that can't be true". Score 1 if some attempt is made to acknowledge but the patient's priorities/beliefs are also challenged, e.g. Patient: "I couldn't get a dentist because there weren't any" Nurse: "It sounds like it was really difficult for you to get yourself into the dentist, but there were dentists available". Score 2 if the patient's priorities/beliefs and challenges are acknowledged during most of the session, but once or twice the nurse challenged the patient on these. Score 3 if acknowledgments rather than challenges are present, e.g. Patient: "I couldn't get a dentist" Nurse: "Sounds like it was really tricky for you to get into a dentist in the past". Evaluation procedures to support scoring throughout the RETURN main trial will also include the consistent use of the same scoring team and the employment of inter-rater reliability methods. Where an agreement rate of less than 60% is found between team members responsible for scoring throughout the course of the trial, additional scoring training will take place, again using inter-rater reliability to determine agreement rates. The RETURN checklist has been designed as a multi-functional tool for the implementation of fidelity strategies. Its functions are to act as a standardised training aid, a method to set competency expectations, a means of providing feedback to interventionists, a means of monitoring protocol adherence and competency levels throughout the main trial, and a means of assessing the level of fidelity achieved in intervention deliveries at the end of the trial. Develop a plan to record intervention protocol deviations and a method for providing timely feedback to interventionists Several strategies were developed to document and address protocol deviations in the RETURN main trial: A coaching culture will be integrated into the training methodology to promote open communication and rapport between trainers and dental nurses.
This aims to facilitate an environment where protocol deviations would be more likely to be reported, and where feedback would be enacted. This will take the form of regular, personalised, and constructive feedback designed to encourage confidence and build both communication and intervention skills. Additionally, each month, each nurse will have at least one intervention audio-recording evaluated using the RETURN checklist, with strengths and any areas for improvement noted. Checklists will be provided to the dental nurses once completed. Where low scores are found, additional intervention sessions will be scored, supplemented with a support site visit. Booster training will be triggered where necessary by consistently low scores on the RETURN checklist. The protocol deviation plan will be clearly communicated to dental nurses at the outset of the RETURN main trial set-up phase. This transparent approach aims to cultivate an environment where protocol deviations are viewed as opportunities for learning rather than punitive measures. Develop a user-friendly scripted intervention manual to ensure consistency of delivery and adherence to active ingredients of the treatment Learning from the feasibility study suggested that using scripted approaches to intervention delivery was unsuccessful, as is demonstrated in the following observation: The nurse opened the pack and put it on the table. They read through the patient pack introduction information printed on the materials very quietly, not making eye contact with the patient as they did this. The patient was listening intently, leaning forward slightly to be able to hear what the nurse was saying. I felt some of the meaning was lost during this explanation, as the nurse was so quiet and stilted, it was difficult to hear. The nurse came across as very unconfident and reliant on the written materials. This created no room for the patient discussion. Observation 02: Site 02, DN02. For the RETURN main trial, therefore, there will be a conscious move away from scripted materials, and instead, training intensity will be increased. In addition, an easy-to-follow intervention crib sheet was developed (see Additional File ), alongside a written intervention training manual, designed to support intervention delivery beyond training (see Additional File ). Plan for implementation setbacks During the feasibility study, limited resources at sites resulted in just one nurse from each of the two sites taking part in research activities, despite training being delivered to multiple nurses in all three sites. As research activities were intended to integrate into nurses' regular duties within urgent dental care settings, this constraint contributed to recruitment delays, exacerbated by factors such as COVID-19, staff sickness or holiday leave. To address these challenges in the RETURN main trial, additional 'float' dental nurses will be employed as part of the core research team to carry out research duties across sites, utilising funds earmarked for reimbursing dental practices for staff time spent on research activities. Furthermore, efforts will be made to train multiple dental nurses at each site, where feasible. These strategies will form part of the early site communications. Minimise contamination between conditions Contamination was not found to be an issue within the feasibility study. Nonetheless, in the RETURN main trial, training will be provided around the importance of allocation adherence.
In addition, portable research activity flow charts detailing the specific actions to follow within each study arm will be provided, supported by regular site visits from the research team. Questions pertaining to contamination will be included in the RETURN trial follow-up (i.e. control group question: 'Did you receive any materials at your urgent care appointment to help you to find a dentist? If so, what did that look like?'). Training Training was identified as an area for improvement during the feasibility study. 'Hiring' dental nurses to deliver the RETURN intervention Confidence was found to be a major contributing factor to intervention delivery success, detailed in the feasibility observation below: I passed the booklet back to the nurse, and they started going through the booklet. They didn't explain what the booklet was for. They read out the title on each page loudly, but the rest of the information on the pages was said very quietly and sounded a little muddled. The walk-through of the booklet didn't flow, and it was more like they were reading it to themselves under their breath to familiarise themselves with the content. DN02: (page 2) "For healthy teeth do I need to go?" "This is Megan, you can see about her story on the video". Page 3 is skipped. Page 4 "Keeping on top of it" "It's important to go all the time" Page 5 "This is a picture of a tooth that only the dentist could see, it shows the decay". It was very difficult to hear what DN02 was saying, and the overall feeling was of someone who lacked confidence. I felt harsh making them deliver when clearly they didn't feel ready or confident. Observation 05, Site 01: DN02. At the set-up phase of the feasibility study, it was stipulated that effective delivery of the intervention would require experienced nurses with proficient communication skills. This expectation was based on the belief that such traits would facilitate the skills required to successfully deliver the intervention. However, implementation revealed challenges to this ideal. Informal discussions with dental staff at sites, recorded in field notes, noted that dental teams could use trial participation as an opportunity to enhance the communication skills of their staff involved in the research. This experience highlights the existence of conflicting priorities when conducting research. As we found dental nurse attributes cannot be guaranteed, the RETURN main trial training will include elements specifically designed to increase confidence and communication skills, including enhanced role-play and a coaching-style training approach. In addition, training sessions will not be fixed in length; instead, provision will be based on individualised need. This will be achievable as shadowing training is planned to occur concurrently with patient recruitment, so as not to hamper trial progress. Standardise training The BCC framework recommends training all interventionists together. In the primary dental care setting, this would require inviting dental teams to converge in a mutually convenient location, and taking staff members out of clinic was found to be problematic during the feasibility study. Instead, in the RETURN main trial, a model will be used where site personnel are trained together. As this method could affect the standardisation of the training delivered, multiple strategies were designed to mitigate that risk: Implementation of a 'train the trainers' training model led by a clinical psychologist.
Using the same team of trainers throughout the trial. Using identical training materials for each site. Using the same role play tasks with all teams trained. Using a training manual and training videos. The development of a central website to house all training materials, as well as providing hard copies of all materials to all trainees. Use a training content checklist to ensure all training components were delivered to all dental teams (see Additional File ). Ensure dental nurse skill acquisition Skill acquisition was not measured as part of the feasibility study. However, observations demonstrated variation in competency between dental nurses. Therefore, a plan was developed for the RETURN main trial to test skill acquisition during the different phases of training: Training phase 1 Good Clinical Practice Training – A one-hour online module. Skill acquisition measured through an online quiz, with a pass mark of 80%. Training phase 2 Intervention training – three hours, face to face delivery with a mixture of didactic learning, open discussions, and role plays. Skill acquisition measured through discussion and observations by a RETURN trainer through an intervention delivery skill acquisition checklist (see Additional File ). Training phase 3 On the job shadowing training – the amount will depend on demonstration of competencies. Skill acquisition will be measured through in vitro observations using the RETURN checklist. Each interventionist will need to achieve a score of 80% within each intervention component in a single session to be signed off as competent to deliver the intervention independently. Scoring will be conducted by the RETURN trainers and scoring decisions will be supported by the guidance manual. Minimise ‘drift’ in dental nurse skills Skills drift was not explicitly monitored during the feasibility study. However, from feasibility observations, it was discovered that intervention skills needed to be practiced regularly to be maintained. Therefore, a strategy to reduce skills drift was developed for the RETURN main trial: Frequent (at least one per month, per nurse) scoring and feedback of audio-recorded interventions using the scoring checklist, including elaborating strengths and areas for development. Triggered site visits to provide additional booster training and support in the event of low scoring (< 60% in any one component). Triggered (by consistent low scores) or requested reflective practice sessions, wherein a selected audio-recording will be discussed with the dental team at site, focusing on intervention elements that went well, and things that could be improved or done differently. Maintaining a collaborative coaching style approach to all feedback provision, booster training and reflective practice sessions to maintain relationships between the trainers and the dental nurses. Accommodate dental nurse differences Stark differences between the skills and experience levels of the feasibility dental nurses were found. The dental nurses involved in the delivery of the RETURN intervention study were not selected by the research team, they were volunteered by the dental practice owners / managers due to their availability and expression of interest in taking part. DN02 had less than 2 years’ experience of dental nursing and lacked confidence with patient communication. DN01 had more than 10 years’ experience, demonstrated good communication skills and overall was more confident in their approach to the intervention. This quotation from DN02 describes this: Yeah. 
I don't know it might be easy for other nurses but for my range of vocabulary to like GCSE, maybe some words I found difficult, and how it works, like the way it's [training materials] worded was difficult. If it was more informal, like 'What are we going to do?' 'We're going to do this'. Like a chatty kind of presentation maybe. Interview with DN02 There were also differences in day-to-day responsibilities within their respective dental practices, with DN01 taking a more patient engaged role than DN02. These contrasting quotations demonstrate this: It's very difficult, you know, especially for nurses because they do not have a lot of contact with patients. It's only the dentist that takes over everything. So we do our own bit in surgery, cleaning, helping, but we don't have conversations like that with patients. Interview with DN02 I like talking to patients and I like the interaction and chatting with them and, you know, talking to different people as well and finding out their barriers. I think we seem a bit more human to them as well when we sit down and have a chat with them and we're not just the scary people who work in the dentist. Interview with DN01. An additional challenge identified during the feasibility study was the need for training to encompass multiple methods, accommodating a wide range of baseline research skill levels. This was highlighted by the following observation on the first day of recruitment at site 02: The nurse [DN02] told me that during the feasibility study training, they didn't know what the word feasibility meant. They described that this word was in big letters on the very first training slide and all they could think about was wanting to Google what that word meant, so found it difficult to keep up with the rest of the training. Observation 01, Site 02: DN02 To maintain training standardisation whilst also acknowledging the challenge of variation between nurses likely to be experienced in the main trial, an 'on-the-job shadowing' training element was developed. Shadowing training will involve a RETURN team member 'chaperoning' a dental nurse whilst they deliver interventions. Tailored support will be provided alongside real-time verbal and written feedback. This training is not time limited. Training will continue until the nurse both demonstrates competency through the scoring checklist and articulates to the trainers that they feel they have achieved a level of confidence sufficient to deliver the intervention independently. This style of 'on-the-job' shadowing training was developed for its ability to be highly individualised, and because it reflects the style of training routinely undertaken by dental nurses in primary care. Enhance buy-in from dental nurses Enhancement of dental nurse buy-in was considered a priority for the upcoming RETURN trial. Within the dental practice setting, a practice owner often acts as the gatekeeper to research conduct. Those carrying out the research become involved later in the process, with vital opportunities to increase buy-in often missed. Accordingly, a series of dental nurse buy-in strategies were developed for implementation in the RETURN main trial: Continuing Professional Development (CPD) accreditation for all training. Training components designed to explain the purpose of the research, paying particular attention to patient benefit. An early interactive information session including dental nurses, highlighting the opportunities presented by the trial for enhanced patient interaction and training.
Inclusion of communication skills training targeted to dental nurses. Monthly newsletters aimed at dental nurses and wider practice staff, with the addition of real dental nurse stories about their involvement in the trial and a quiz and prize element. Engagement lunches for dental nurses as a reward for participation. Use of communication modes congruent with dental nurse preferences i.e. WhatsApp messages rather than emails. Regular site visits to increase self-efficacy and confidence with research activities. Dental nurse awards evening to celebrate trial achievements (i.e. best recruiter etc.) Delivery Use a scripted curriculum or treatment manual Based on feasibility observations, scripts will not be utilised in the RETURN main trial. Instead, a selection of prompts will be provided to the nurses to ensure the intervention’s essential components are delivered. These prompts will take the form of the training manual (including intervention delivery cheat sheets), the intervention crib sheet, and videos demonstrating intervention delivery. Some components however, are ‘scripted’ within the intervention materials themselves, such as the goal and action planning section (see Additional File ). Assess non-specific effects through multiple methods and on an ongoing basis Non-specific factors (such as empathy and components that lend themselves to the target communication style) will be assessed as a stand-alone domain within the RETURN checklist. Nonspecific effects will also specifically be discussed during shadowing training. Ensure both adherence to the protocol and competency of intervention delivery Adherence and competency of intervention deliveries will be assessed through the application of the RETURN checklist throughout the main trial. In addition, 100% of all available recordings will be assessed at the end of the main trial to provide a comprehensive overview of the adherence and competency of intervention deliveries. A fidelity threshold of 80% in every domain per intervention delivery will be applied when scoring the recordings. Receipt Ensure participants’ understanding of the intervention Although data collected from the patients during the feasibility study suggested that patients overwhelmingly found the intervention useful, understandable and relevant, it is helpful to outline here the steps taken to enhance participants’ understanding of the intervention during its development: The RETURN intervention is designed to be engaging, specifically targeted to the trial population. An extensive patient and public involvement (PPI) work stream fed into its design (full details have been published elsewhere ), with the aim of ensuring the materials were culturally relevant, containing congruent messages and images to the trial population. A design company and a professional illustrator were employed to embed these strategies. To account for different learning styles, information was presented and repeated using multiple formats - verbal, written, pictorial and videography. The intervention materials were written to a reading age of 8 years to ensure health literacy inclusivity. Intervention delivery sessions are formatted as reciprocal conversations, and therefore by design, mutual understanding between the patient and dental nurse is embedded. 
During training and throughout the recruitment period, intervention deliveries will be scored, and feedback provided to ensure that 'reciprocity' and patient understanding is embedded, with these criteria factored into the RETURN checklist. Ensure participants' ability to perform behavioural skills The RETURN intervention seeks to target the behaviour of routine dental appointment visiting. To ensure patients' ability to perform the behavioural skills required, the intervention was designed to be tailored, considering obstacles unique to individuals' lives. The intervention culminates in a goal setting and action planning exercise, where participants think through their individual circumstances, and write out SMART (specific, measurable, achievable, relevant and time-bound) goals and plans to help them to overcome their barriers. In this way the target behavioural skills were articulated, discussed and broken down into small actions. From the feasibility observations, this element of the intervention needed improvement, specifically around patient engagement. The nurse put the booklet to one side, and then took the planning booklet from their knee. "We know that writing plans helps". I felt this introduction didn't really explain to the participant what the nurse was asking them to do – The nurse looked at me to help as they were getting their words muddled…The nurse devised the plan for the participant, rather than letting the participant make the plan for themselves. The patient set their goal themselves, but they did not put in much detail. They wrote down 3 words and didn't discuss this with the nurse at all. DN02, Observation 11 For the RETURN main trial, several strategies will therefore be implemented to improve how behavioural skills are assessed and facilitated during the intervention delivery sessions: Training will include a dedicated component on how to facilitate goal setting and action planning, emphasising the importance of facilitating and not leading the task, and how to encourage patients to think through and articulate their own mechanisms. Goal setting and action planning have been included on the RETURN checklist, and timely feedback will be provided. A follow-up text message will be sent to participants a week post-intervention, including the participants' own wording from the goals and plans set within the intervention sessions, to reinforce behavioural skills and build self-efficacy. The 6-month follow-up telephone call to patients will explore their comprehension of the intervention and how meaningful they found it, in order to track receipt. A component of the intervention conversation will encourage discussion around what was achieved during the intervention session. This has been designed to improve participant receipt of the intervention by setting intentions. This element is also included in the RETURN checklist assessment. Enactment Participant performance of the intervention skills will be assessed in settings in which the intervention might be applied Data were collected from patients of the feasibility study amid the first COVID-19 lockdown restrictions (May – September 2020), and accordingly it was not possible to assess enactment at that time. Therefore, as part of the RETURN main trial telephone follow-up at three time points, questions will be included about whether and how the intervention materials and associated intervention skills had been used since leaving the urgent care dental setting.
Questions will focus on which parts of the intervention had been used, whether the intervention skills had been enacted (i.e. phoning for a dental appointment, exploring which dental practice they may like to contact, attending a dental appointment) and how the intervention supported any actions taken to attend a routine dental appointment. Additionally, enactment strategies are embedded within the intervention materials themselves. Some materials are labelled ‘to look at at home’, providing encouragement and support in locating a dentist, making an appointment and thereafter attending an appointment – the behaviours targeted by the intervention. The full RETURN fidelity strategy is summarised in Table . The strategies presented there show the tangible actions taken to attend to the various intervention fidelity recommendations, which may help other researchers to think through the strategies that will apply to their studies (i.e. using audio recordings to monitor skills drift). Explicitly identify and use a theoretical model as a basis for the intervention and ensure the intervention components and measures are reflective of underlying theory The theoretical underpinnings guiding the intervention have been detailed in a publication outlining the intervention’s development process, which includes a comprehensive logic model . Briefly, the intervention draws from multiple theoretical frameworks, incorporating elements of Protection Motivation Theory and Identity-based Motivation Theory . During the feasibility study, it became evident that the conversational aspect of the intervention required a structured approach to enhance standardisation across intervention sessions. Accordingly, Motivational Interviewing (MI) ‘spirit’ was introduced as a framework to provide structure to these conversations in the RETURN main trial, while bolstering the theoretical coherence of intervention deliveries. Ensure consistent intervention dose and develop a monitoring plan to maintain consistency Variations in intervention ‘dose’ were noted during the feasibility study, with session durations ranging from 10 to 37 min, Mean (Standard Deviation (SD)) = 21 minutes. Observations revealed this was primarily influenced by patient engagement and confidence levels of the interventionist. Whilst an intervention duration target of around 15 min was set for pragmatic reasons as part of the intervention design goals for the feasibility study, for the RETURN main trial, a larger emphasis on dose standardisation will be implemented through training. Additionally, to underscore the importance of regulating dose, specific guidance on the duration for each intervention component for the main trial will be given: Barrier discussion – 4 min. Motivation enhancement: video and discussion – 3 min. Knowledge enhancement (guided discussion using booklet materials) – 3 min. Setting Specific, Measurable, Achievable, Relevant, and Time-bound (SMART) goals and action plan – 4 min. Setting intentions at the session’s conclusion – 1 min. However, as a patient-centred approach is inherent in MI techniques where discussions are led by patients, variations in intervention dose will be deemed an acceptable intervention adaptation in the RETURN main trial. This decision is supported by the understanding that patients facing multiple barriers may have longer ‘barrier discussions’ leading to variations in intervention duration. 
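Read as simple arithmetic, the per-component timings above sum to the 15-minute design target mentioned earlier. The short sketch below is illustrative only: the variable and function names are ours, not part of the trial protocol, and it tallies recorded component durations against the guidance without enforcing them, in keeping with the monitoring-only stance described next.

# Illustrative sketch only: tallying the per-component duration guidance against
# a recorded session. Component names and the 15-minute target are taken from the
# text above; everything else is an assumption made for this example.
GUIDANCE_MINUTES = {
    "barrier_discussion": 4,
    "motivation_enhancement": 3,
    "knowledge_enhancement": 3,
    "goal_and_action_plan": 4,
    "intention_setting": 1,
}

def total_guided_dose() -> int:
    """Sum of the per-component guidance (equals the ~15-minute design target)."""
    return sum(GUIDANCE_MINUTES.values())

def describe_session(recorded_minutes: dict[str, float]) -> str:
    """Report a session's recorded dose next to the guidance; variation is noted, not corrected."""
    total = sum(recorded_minutes.get(name, 0.0) for name in GUIDANCE_MINUTES)
    return f"recorded dose = {total:.0f} min (guidance total = {total_guided_dose()} min)"

if __name__ == "__main__":
    example = {
        "barrier_discussion": 9.0,      # a longer barrier discussion is an acceptable adaptation
        "motivation_enhancement": 3.0,
        "knowledge_enhancement": 3.0,
        "goal_and_action_plan": 5.0,
        "intention_setting": 1.0,
    }
    print(describe_session(example))    # recorded dose = 21 min (guidance total = 15 min)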
Monitoring of dose will be achieved through audio-recordings, although no corrective measures will be taken to standardise dose in the main trial. Patients also take intervention materials home, and accordingly, questions about additional engagement with the materials will form part of the RETURN main trial follow-ups. Likewise, metadata from the study website will be reviewed to assess whether patients viewed intervention materials at home. This comprehensive approach to dose monitoring aims to enrich the interpretation of the RETURN main trial findings, and dose variations will be considered in the analysis of study outcomes (i.e. is there an optimum amount of 'dose' to elicit behaviour change?). Develop a plan for how adherence to the protocol will be monitored. Monitor both intervention delivery and assessment administration Adherence to the intervention protocol was identified as a concern during the feasibility study. Based on observations, only 5/11 (45%) intervention patients received the intended discussion. The following feasibility observation illustrates poor adherence to the prescribed approach: DN02: "This is the pack; they have spoken to lots of people to make the pack. There are 6 barriers that people told to them. These are common and lots of people said them". Observation: DN02 was showing the 'What Next Booklet' to the participant but kept it in front of them so the participant was unable to read it. DN02 flashed the booklet and pointed to the barriers. Moving the booklet away again, they read the barriers out one by one. DA0201: "So we have cost, time, I don't think I have any problems, trust, embarrassment, anxiety". The nurse turned the booklet over, and said "and there is also a plan, that is from psychological theory, and there are other materials". All the while DA02 kept hold of the booklet. Observation: DN02 then went back to the barrier page, showing the participant and asked: "Which of these do you relate to?" I felt this was quite a closed question. There was no conversation about what was stopping them from going. The participant was simply asked to choose which one from the list. Observation 06: Site 02, DN02. To address this in the RETURN main trial, adherence monitoring will be strengthened by considering the challenges of the research context. Indeed, findings from a recent scoping review of fidelity reporting in primary care dental settings suggest that the onus of monitoring intervention protocol adherence and competency should sit with research teams. Therefore, in the RETURN main trial, dental nurses will be asked to audio-record 100% of their intervention sessions, rather than alternative monitoring techniques such as asking them to complete checklists after each intervention delivery. Adherence and competency during the RETURN main trial will be monitored by selecting at least one intervention recording per dental nurse each month, which the research team will score using pre-determined criteria contained within an intervention-specific fidelity checklist (the RETURN checklist, see Table ). The RETURN checklist comprises 6 essential intervention components: overarching communication skills (MI derived), barrier discussion, motivation enhancement through a video, knowledge enhancement through a barrier booklet, goal and action plan setting, intention setting. Each component comprises a combination of theoretical components designed to increase behaviour change capacity (i.e.
encouraging the use of SMART principles for goal setting) and practical requirements (i.e. showing the video relevant to the selected barrier). The scoring system takes the form of a Likert-scale: 0 = not implemented, 1 = partially implemented, 2 = substantially implemented, 3 = fully implemented, to give an indication of both adherence and competency. There are no guidelines to inform the optimum ‘level’ of fidelity that should be present in a BCI delivered within dental practices. However, Durlak and DuPre found outcomes were effective in educational interventions if they were delivered with 60-80% fidelity , and a 90% threshold is frequently used in clinical interventions involving psychological therapies . Therefore, a cautious approach will be adopted in the RETURN main trial and a threshold of 80% within each intervention component will be set for a delivery session to be considered to have achieved high fidelity. To provide guidance and to ensure consistency in intervention scoring, a scoring guidance manual was created (see Additional File ). This was developed collaboratively by RETURN researchers using an iterative approach to ensure that the descriptions contained within the manual were understood consistently across the team. The manual was both created and tested using a method whereby audio-recordings of the feasibility intervention sessions were scored independently, results compared, and discrepancies discussed until consensus was achieved (> 80% agreement rate). The development of the fidelity checklist and the scoring guidance manual followed steps three to five as suggested by Walton and colleagues , with an iterative approach utilising feedback from the RETURN researchers to refine the items and scoring guidance. An example of the scoring guidance for the domain of ‘overarching communication skills’ for the demonstration of ‘priorities, beliefs and challenges acknowledged’ is illustrated below: Patient’s priorities, beliefs and challenges acknowledged patients should not be challenged on their beliefs, priorities or challenges experienced previously, even if they are in direct conflict with the principles of the delivery nurse. These should simply be acknowledged as an experience that occurred. Score 0 if patient’s priorities/beliefs are challenged by the nurse e.g. Patient: “I couldn’t get a dentist because there weren’t any” Nurse “There was loads of NHS availability a year ago so that can’t be true”. Score 1 if some attempt is made to acknowledge but the patient’s priorities/beliefs are also challenged e.g. Patient “I couldn’t get a dentist because there weren’t any” Nurse “It sounds like it was really difficult for you to get yourself into the dentist, but there were dentists available”. Score 2 if patient’s priorities/beliefs and challenges are acknowledged during most of the session, but once or twice the nurse challenged the patients on these. Score 3 if acknowledgments rather than challenges are present. Patient: “I couldn’t get a dentist” Nurse: “Sounds like it was really tricky for you to get into a dentist in the past”. Evaluation procedures to support scoring throughout the RETURN main trial will also include the consistent use of the same scoring team and the employment of interrater reliability methods. Where an agreement rate of less than 60% is found between team members responsible for scoring throughout the course of the trial, additional scoring training will take place, again using inter-rater reliability to determine agreement rates. 
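To make the scoring logic concrete, the following minimal Python sketch shows how 0 to 3 item scores might be converted into component percentages, how the 80% per-component threshold could classify a delivery as high fidelity, and how a simple percent-agreement check against the 60% retraining trigger might be computed. The six component names are taken from the checklist description above; the number of items per component and all function names are illustrative assumptions, not the actual RETURN checklist or scoring procedure.

# Minimal illustrative sketch of the scoring logic described above; not the RETURN tool itself.
COMPONENTS = [
    "overarching_communication_skills",
    "barrier_discussion",
    "motivation_enhancement",
    "knowledge_enhancement",
    "goal_and_action_plan_setting",
    "intention_setting",
]
MAX_ITEM_SCORE = 3                   # Likert scale: 0 = not implemented ... 3 = fully implemented
HIGH_FIDELITY_THRESHOLD = 80.0       # percent, required within every component
RETRAIN_AGREEMENT_THRESHOLD = 60.0   # percent agreement below which scorer retraining is triggered

def component_percentage(item_scores: list[int]) -> float:
    """Convert a component's 0-3 item scores into a percentage of the maximum possible."""
    return 100.0 * sum(item_scores) / (MAX_ITEM_SCORE * len(item_scores))

def high_fidelity(session_scores: dict[str, list[int]]) -> bool:
    """A delivery counts as high fidelity only if every component reaches the threshold."""
    return all(
        component_percentage(session_scores[name]) >= HIGH_FIDELITY_THRESHOLD
        for name in COMPONENTS
    )

def percent_agreement(rater_a: list[int], rater_b: list[int]) -> float:
    """Simple item-level percent agreement between two raters' scores."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

if __name__ == "__main__":
    session = {name: [3, 2, 3] for name in COMPONENTS}      # hypothetical item scores
    print(high_fidelity(session))                            # True: each component scores 8/9, about 89%
    print(percent_agreement([3, 2, 1, 3], [3, 2, 2, 3]))     # 75.0, above the 60% retraining trigger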
The RETURN checklist has been designed as a multi-functional tool for the implementation of fidelity strategies. Its functions are to act as a standardised training aide, a method to set competency expectations, a means of leveraging feedback to interventionists, a means of monitoring protocol adherence and competency levels throughout the main trial, and to assess the level of fidelity achieved in intervention deliveries at the end of the trial. Develop a plan to record intervention protocol deviations and a method for providing timely feedback to interventionists Several strategies were developed to document and address protocol deviations in the RETURN main trial: A coaching culture will be integrated into the training methodology to promote open communication and rapport between trainers and dental nurses. This aims to facilitate an environment where protocol deviations would be more likely to be reported, and where feedback would be enacted. This will take the form of regular, personalised, and constructive feedback designed to encourage confidence and build both communication and intervention skills. Additionally, monthly, each nurse will have at least one intervention audio-recording evaluated using the RETURN checklist with strengths and any areas for improvement noted. Checklists will be provided to the dental nurses once completed. Where low scores are found, additional intervention sessions will be scored, supplemented with a support site visit. Booster training will trigger where necessary through consistent low scores using the RETURN checklist. The protocol deviation plan will be clearly communicated to dental nurses at the outset of the RETURN main trial set-up phase. This transparent approach aims to cultivate an environment where protocol deviations are viewed as opportunities for learning rather than punitive measures. Develop a user-friendly scripted intervention manual to ensure consistency of delivery and adherence to active ingredients of the treatment Learning from the feasibility study suggested that using scripted approaches to intervention delivery were unsuccessful, as is demonstrated in the following observation: The nurse opened the pack and put it on the table. They read through the patient pack introduction information printed on the materials very quietly, not making eye contact with the patient as they did this. The patient was listening intently, leaning forward slightly to be able to hear what the nurse was saying. I felt some of the meaning was lost during this explanation, as the nurse was so quiet and stilted, it was difficult to hear. The nurse came across as very unconfident and reliant on the written materials. This created no room for the patient discussion. Observation 02: Site 02, DN02. For the RETURN main trial therefore, there will be a conscious move away from scripted materials, and instead, training intensity will be increased. In addition, an easy-to-follow intervention crib sheet was developed (see Additional File ), alongside a written intervention training manual, designed to support intervention delivery beyond training (see Additional File ). Plan for implementation setbacks During the feasibility study, limited resources at sites resulted in just one nurse from each of the two sites taking part in research activities, despite delivering training to multiple nurses in all three sites. 
As research activities were intended to integrate into nurses’ regular duties within urgent dental care settings, this constraint contributed to recruitment delays, exacerbated by factors such as COVID-19, staff sickness or holiday leave. To address these challenges in the RETURN main trial, additional ‘float’ dental nurses will be employed as part of the core research team to carry our research duties across sites, utilising funds earmarked for reimbursing dental practices for staff time spent on research activities. Furthermore, efforts will be made to train multiple dental nurses at each site, where feasible. These strategies will form part of the early site communications. Minimize contamination between conditions Contamination was not found to be an issue within the feasibility study. Nonetheless, in the RETURN main trial, training will be provided around the importance of allocation adherence. In addition, portable research activity flow charts detailing the specific actions to follow within each study arm will be provided, supported by regular site visits from the research team. Questions will be included in the RETURN trial follow-up pertaining to contamination (i.e. control group question: ‘Did you receive any materials at your urgent care appointment to help you to find a dentist? If so, what did that look like? ). The theoretical underpinnings guiding the intervention have been detailed in a publication outlining the intervention’s development process, which includes a comprehensive logic model . Briefly, the intervention draws from multiple theoretical frameworks, incorporating elements of Protection Motivation Theory and Identity-based Motivation Theory . During the feasibility study, it became evident that the conversational aspect of the intervention required a structured approach to enhance standardisation across intervention sessions. Accordingly, Motivational Interviewing (MI) ‘spirit’ was introduced as a framework to provide structure to these conversations in the RETURN main trial, while bolstering the theoretical coherence of intervention deliveries. Variations in intervention ‘dose’ were noted during the feasibility study, with session durations ranging from 10 to 37 min, Mean (Standard Deviation (SD)) = 21 minutes. Observations revealed this was primarily influenced by patient engagement and confidence levels of the interventionist. Whilst an intervention duration target of around 15 min was set for pragmatic reasons as part of the intervention design goals for the feasibility study, for the RETURN main trial, a larger emphasis on dose standardisation will be implemented through training. Additionally, to underscore the importance of regulating dose, specific guidance on the duration for each intervention component for the main trial will be given: Barrier discussion – 4 min. Motivation enhancement: video and discussion – 3 min. Knowledge enhancement (guided discussion using booklet materials) – 3 min. Setting Specific, Measurable, Achievable, Relevant, and Time-bound (SMART) goals and action plan – 4 min. Setting intentions at the session’s conclusion – 1 min. However, as a patient-centred approach is inherent in MI techniques where discussions are led by patients, variations in intervention dose will be deemed an acceptable intervention adaptation in the RETURN main trial. This decision is supported by the understanding that patients facing multiple barriers may have longer ‘barrier discussions’ leading to variations in intervention duration. 
Monitoring of dose will be achieved through audio-recordings, although no corrective measures will be taken to standardise dose in the main trial. Patients also take intervention materials home, and accordingly, questions about additional engagement with the materials will form part of the RETURN main trial follow-ups. Likewise, metadata from the study website will be reviewed to assess whether patients viewed intervention materials at home. This comprehensive approach to dose monitoring aims to enrich the interpretation of the RETURN main trial findings, and dose variations will be considered in the analysis of study outcomes (i.e. is there an optimum amount of ‘dose’ to elucidate behaviour change? ). Adherence to the intervention protocol was identified as a concern during the feasibility study. Based on observations, only 5/11 (45%) intervention patients received the intended discussion. The following feasibility observation illustrates poor adherence to the prescribed approach: DN02: “ This is the pack; they have spoken to lots of people to make the pack. There are 6 barriers that people told to them. These are common and lots of people said them ”. Observation: DN02 was showing the ‘What Next Booklet’ to the participant but kept it in front of them so the participant was unable to read it. DN02 flashed the booklet and pointed to the barriers. Moving the booklet away again, they read the barriers out one by one. DA0201: “ So we have cost , time , I don’t think I have any problems , trust , embarrassment , anxiety”. The nurse turned the booklet over, and said “and there is also a plan , that is from psychological theory , and there are other materials ”. All the while DA02 kept hold of the booklet. Observation: DN02 then went back to the barrier page, showing the participant and asked: “ Which of these do you relate to? ” I felt this was quite a closed question. There was no conversation about what was stopping them from going. The participant was simply asked to choose which one from the list. Observation 06: Site 02, DN02. To address this in the RETURN main trial, adherence monitoring will be strengthened by considering the challenges of the research context. Indeed, findings from a recent scoping review of fidelity reporting in primary care dental settings suggests the onus/burden of intervention protocol adherence and competency monitoring should sit with research teams. Therefore, in the RETURN main trial, dental nurses will be asked to audio-record 100% of their intervention sessions, rather than alternative monitoring techniques such as asking them to complete checklists after each intervention delivery. Adherence and competency during the RETURN main trial will be monitored by selecting at least one intervention recording per dental nurse each month which the research team will score using pre-determined criteria contained within an intervention specific fidelity checklist (the RETURN checklist, see Table ). The RETURN checklist comprises 6 essential intervention components: overarching communication skills (MI derived), barrier discussion, motivation enhancement through a video, knowledge enhancement through a barrier booklet, goal and action plan setting, intention setting. Each component comprises a combination of theoretical components designed to increase behaviour change capacity (i.e. encouraging the use of SMART principles for goal setting) and practical requirements (i.e. showing the video relevant to the selected barrier). 
The scoring system takes the form of a Likert-scale: 0 = not implemented, 1 = partially implemented, 2 = substantially implemented, 3 = fully implemented, to give an indication of both adherence and competency. There are no guidelines to inform the optimum ‘level’ of fidelity that should be present in a BCI delivered within dental practices. However, Durlak and DuPre found outcomes were effective in educational interventions if they were delivered with 60-80% fidelity , and a 90% threshold is frequently used in clinical interventions involving psychological therapies . Therefore, a cautious approach will be adopted in the RETURN main trial and a threshold of 80% within each intervention component will be set for a delivery session to be considered to have achieved high fidelity. To provide guidance and to ensure consistency in intervention scoring, a scoring guidance manual was created (see Additional File ). This was developed collaboratively by RETURN researchers using an iterative approach to ensure that the descriptions contained within the manual were understood consistently across the team. The manual was both created and tested using a method whereby audio-recordings of the feasibility intervention sessions were scored independently, results compared, and discrepancies discussed until consensus was achieved (> 80% agreement rate). The development of the fidelity checklist and the scoring guidance manual followed steps three to five as suggested by Walton and colleagues , with an iterative approach utilising feedback from the RETURN researchers to refine the items and scoring guidance. An example of the scoring guidance for the domain of ‘overarching communication skills’ for the demonstration of ‘priorities, beliefs and challenges acknowledged’ is illustrated below: Patient’s priorities, beliefs and challenges acknowledged patients should not be challenged on their beliefs, priorities or challenges experienced previously, even if they are in direct conflict with the principles of the delivery nurse. These should simply be acknowledged as an experience that occurred. Score 0 if patient’s priorities/beliefs are challenged by the nurse e.g. Patient: “I couldn’t get a dentist because there weren’t any” Nurse “There was loads of NHS availability a year ago so that can’t be true”. Score 1 if some attempt is made to acknowledge but the patient’s priorities/beliefs are also challenged e.g. Patient “I couldn’t get a dentist because there weren’t any” Nurse “It sounds like it was really difficult for you to get yourself into the dentist, but there were dentists available”. Score 2 if patient’s priorities/beliefs and challenges are acknowledged during most of the session, but once or twice the nurse challenged the patients on these. Score 3 if acknowledgments rather than challenges are present. Patient: “I couldn’t get a dentist” Nurse: “Sounds like it was really tricky for you to get into a dentist in the past”. Evaluation procedures to support scoring throughout the RETURN main trial will also include the consistent use of the same scoring team and the employment of interrater reliability methods. Where an agreement rate of less than 60% is found between team members responsible for scoring throughout the course of the trial, additional scoring training will take place, again using inter-rater reliability to determine agreement rates. The RETURN checklist has been designed as a multi-functional tool for the implementation of fidelity strategies. 
Its functions are to act as a standardised training aide, a method to set competency expectations, a means of leveraging feedback to interventionists, a means of monitoring protocol adherence and competency levels throughout the main trial, and to assess the level of fidelity achieved in intervention deliveries at the end of the trial. patients should not be challenged on their beliefs, priorities or challenges experienced previously, even if they are in direct conflict with the principles of the delivery nurse. These should simply be acknowledged as an experience that occurred. Score 0 if patient’s priorities/beliefs are challenged by the nurse e.g. Patient: “I couldn’t get a dentist because there weren’t any” Nurse “There was loads of NHS availability a year ago so that can’t be true”. Score 1 if some attempt is made to acknowledge but the patient’s priorities/beliefs are also challenged e.g. Patient “I couldn’t get a dentist because there weren’t any” Nurse “It sounds like it was really difficult for you to get yourself into the dentist, but there were dentists available”. Score 2 if patient’s priorities/beliefs and challenges are acknowledged during most of the session, but once or twice the nurse challenged the patients on these. Score 3 if acknowledgments rather than challenges are present. Patient: “I couldn’t get a dentist” Nurse: “Sounds like it was really tricky for you to get into a dentist in the past”. Evaluation procedures to support scoring throughout the RETURN main trial will also include the consistent use of the same scoring team and the employment of interrater reliability methods. Where an agreement rate of less than 60% is found between team members responsible for scoring throughout the course of the trial, additional scoring training will take place, again using inter-rater reliability to determine agreement rates. The RETURN checklist has been designed as a multi-functional tool for the implementation of fidelity strategies. Its functions are to act as a standardised training aide, a method to set competency expectations, a means of leveraging feedback to interventionists, a means of monitoring protocol adherence and competency levels throughout the main trial, and to assess the level of fidelity achieved in intervention deliveries at the end of the trial. Several strategies were developed to document and address protocol deviations in the RETURN main trial: A coaching culture will be integrated into the training methodology to promote open communication and rapport between trainers and dental nurses. This aims to facilitate an environment where protocol deviations would be more likely to be reported, and where feedback would be enacted. This will take the form of regular, personalised, and constructive feedback designed to encourage confidence and build both communication and intervention skills. Additionally, monthly, each nurse will have at least one intervention audio-recording evaluated using the RETURN checklist with strengths and any areas for improvement noted. Checklists will be provided to the dental nurses once completed. Where low scores are found, additional intervention sessions will be scored, supplemented with a support site visit. Booster training will trigger where necessary through consistent low scores using the RETURN checklist. The protocol deviation plan will be clearly communicated to dental nurses at the outset of the RETURN main trial set-up phase. 
This transparent approach aims to cultivate an environment where protocol deviations are viewed as opportunities for learning rather than punitive measures. Learning from the feasibility study suggested that using scripted approaches to intervention delivery were unsuccessful, as is demonstrated in the following observation: The nurse opened the pack and put it on the table. They read through the patient pack introduction information printed on the materials very quietly, not making eye contact with the patient as they did this. The patient was listening intently, leaning forward slightly to be able to hear what the nurse was saying. I felt some of the meaning was lost during this explanation, as the nurse was so quiet and stilted, it was difficult to hear. The nurse came across as very unconfident and reliant on the written materials. This created no room for the patient discussion. Observation 02: Site 02, DN02. For the RETURN main trial therefore, there will be a conscious move away from scripted materials, and instead, training intensity will be increased. In addition, an easy-to-follow intervention crib sheet was developed (see Additional File ), alongside a written intervention training manual, designed to support intervention delivery beyond training (see Additional File ). During the feasibility study, limited resources at sites resulted in just one nurse from each of the two sites taking part in research activities, despite delivering training to multiple nurses in all three sites. As research activities were intended to integrate into nurses’ regular duties within urgent dental care settings, this constraint contributed to recruitment delays, exacerbated by factors such as COVID-19, staff sickness or holiday leave. To address these challenges in the RETURN main trial, additional ‘float’ dental nurses will be employed as part of the core research team to carry our research duties across sites, utilising funds earmarked for reimbursing dental practices for staff time spent on research activities. Furthermore, efforts will be made to train multiple dental nurses at each site, where feasible. These strategies will form part of the early site communications. Contamination was not found to be an issue within the feasibility study. Nonetheless, in the RETURN main trial, training will be provided around the importance of allocation adherence. In addition, portable research activity flow charts detailing the specific actions to follow within each study arm will be provided, supported by regular site visits from the research team. Questions will be included in the RETURN trial follow-up pertaining to contamination (i.e. control group question: ‘Did you receive any materials at your urgent care appointment to help you to find a dentist? If so, what did that look like? ). Training was identified as an area for improvement during the feasibility study. ‘Hiring’ dental nurses to deliver the RETURN intervention Confidence was found to be a major contributing factor to intervention delivery success, detailed in the feasibility observation below: I passed the booklet back to the nurse, and they started going through the booklet. They didn’t explain what the booklet was for. They read out the title on each page loudly, but the rest of the information on the pages was said very quietly and sound a little muddled. The walk through of the booklet didn’t flow, and it’s more like they were reading it to them themselves under their breath to familiarise themselves with the content. 
DN02: (page 2) “ For healthy teeth do I need to go? ” “ This is Megan , you can see about her story on the video ”. Page 3 is skipped. Page 4 “ Keeping on top of it ” “ It’s important to go all the time ” Page 5 “ This is a picture of a tooth that only the dentist could see , it shows the decay ”. It is very difficult to hear what DN02 was saying, and the overall feeling is someone who lacks confidence. I felt harsh making them deliver when clearly, they didn’t feel ready with any confidence. Observation 05, Site 01: DN02. At the setup phase of the feasibility study, it was stipulated that effective delivery of the intervention would require experienced nurses with proficient communication skills. This expectation was based on the belief that such traits would facilitate the skills required to successfully deliver the intervention. However, implementation revealed challenges to this ideal. Informal discussions with dental staff at sites, recorded in field notes, noted that dental teams could use trial participation as an opportunity to enhance the communication skills of their staff involved in the research. This experience highlights the existence of conflicting priorities when conducting research. As we found dental nurse attributes cannot be guaranteed, the RETURN main trial training will include elements specifically designed to increase confidence and communication skills, including enhanced role-play and a coaching style training approach. In addition, training sessions will not be fixed in length, and instead provision will be based on individualised need. This will be achievable as shadowing training is planned to occur concurrently with patient recruitment, so as not to hamper trial progress. Standardise training The BCC framework recommends training all interventionists together. In the primary dental care setting, this would require inviting dental teams to converge in a mutually convenient location, and taking staff members out of clinic was found to be problematic during the feasibility study. Instead, in the RETURN main trial, a model will be used where site personnel are trained together. As this method could affect the standardisation of the training delivered, multiple strategies were designed to mitigate that risk: Implementation of a ‘train the trainers’ training model led by a clinical psychologist. Using the same team of trainers throughout the trial. Using identical training materials for each site. Using the same role play tasks with all teams trained. Using a training manual and training videos. The development of a central website to house all training materials, as well as providing hard copies of all materials to all trainees. Use a training content checklist to ensure all training components were delivered to all dental teams (see Additional File ). Ensure dental nurse skill acquisition Skill acquisition was not measured as part of the feasibility study. However, observations demonstrated variation in competency between dental nurses. Therefore, a plan was developed for the RETURN main trial to test skill acquisition during the different phases of training: Training phase 1 Good Clinical Practice Training – A one-hour online module. Skill acquisition measured through an online quiz, with a pass mark of 80%. Training phase 2 Intervention training – three hours, face to face delivery with a mixture of didactic learning, open discussions, and role plays. 
Skill acquisition measured through discussion and observations by a RETURN trainer through an intervention delivery skill acquisition checklist (see Additional File ). Training phase 3 On the job shadowing training – the amount will depend on demonstration of competencies. Skill acquisition will be measured through in vitro observations using the RETURN checklist. Each interventionist will need to achieve a score of 80% within each intervention component in a single session to be signed off as competent to deliver the intervention independently. Scoring will be conducted by the RETURN trainers and scoring decisions will be supported by the guidance manual. Minimise ‘drift’ in dental nurse skills Skills drift was not explicitly monitored during the feasibility study. However, from feasibility observations, it was discovered that intervention skills needed to be practiced regularly to be maintained. Therefore, a strategy to reduce skills drift was developed for the RETURN main trial: Frequent (at least one per month, per nurse) scoring and feedback of audio-recorded interventions using the scoring checklist, including elaborating strengths and areas for development. Triggered site visits to provide additional booster training and support in the event of low scoring (< 60% in any one component). Triggered (by consistent low scores) or requested reflective practice sessions, wherein a selected audio-recording will be discussed with the dental team at site, focusing on intervention elements that went well, and things that could be improved or done differently. Maintaining a collaborative coaching style approach to all feedback provision, booster training and reflective practice sessions to maintain relationships between the trainers and the dental nurses. Accommodate dental nurse differences Stark differences between the skills and experience levels of the feasibility dental nurses were found. The dental nurses involved in the delivery of the RETURN intervention study were not selected by the research team, they were volunteered by the dental practice owners / managers due to their availability and expression of interest in taking part. DN02 had less than 2 years’ experience of dental nursing and lacked confidence with patient communication. DN01 had more than 10 years’ experience, demonstrated good communication skills and overall was more confident in their approach to the intervention. This quotation from DN02 describes this: Yeah. I don’t know it might be easy for other nurses but for my range of vocabulary to like GCSE, maybe some words I found difficult, and how it works, like the way it’s [training materials] worded was difficult. If it was more informal, like ‘What are we going to do?’ ‘We’re going to do this’. Like a chatty kind of presentation maybe. Interview with DN02 There were also differences in day-to-day responsibilities within their respective dental practices, with DN01 taking a more patient engaged role than DN02. These contrasting quotations demonstrate this: It’s very difficult, you know, especially for nurses because they do not have a lot of contact with patients. It’s only the dentist that takes over everything. So we do our own bit in surgery, cleaning, helping, but we don’t have conversations like that with patients. Interview with DN02 I like talking to patients and I like the interaction and chatting with them and, you know, talking to different people as well and finding out their barriers. 
I think we seem a bit more human to them as well when we sit down and have a chat with them and we’re not just the scary people who work in the dentist. Interview with DN01. An additional challenge identified during the feasibility study was the need for training to encompass multiple methods, accommodating a wide range of baseline research skill levels. This was highlighted by the following observation on the first day of recruitment at site 02: The nurse [DN02] told me that during the feasibility study training, they didn’t know what the word feasibility meant. They described that this word was in big letters on the very first training slide and all they could think about was wanting to Google what that word meant, so found it difficult to keep up with the rest of the training. Observation 01, Site 02: DN02 To maintain training standardisation whilst also acknowledging the challenge of variation between nurses likely be experienced in the main trial, an ‘on-the-job shadowing’ training element was developed. Shadowing training will involve a RETURN team member ‘chaperoning’ a dental nurse whilst they deliver interventions. Tailored support will be provided alongside real-time verbal and written feedback. This training is not time limited. Training will continue until the nurse both demonstrates competency through the scoring checklist and articulates to the trainers that they feel they have achieved a level of confidence sufficient to deliver the intervention independently. This style of ‘on-the-job’ shadowing training was developed for its ability to be highly individualised, and because it reflects the stye of training routinely undertaken by dental nurses in primary care. Enhance buy-in from dental nurses Enhancement of dental nurse buy-in was considered a priority for the upcoming RETURN trial. Within the dental practice setting, a practice owner often acts as the gatekeeper to research conduct. Those carrying out the research become involved later in the process, with vital opportunities to increase buy-in often missed. Accordingly, a series of dental nurse buy-in strategies were developed for implementation in the RETURN main trial: Continuing Professional Development (CPD) accreditation for all training. Training components designed to explain the purpose of the research, paying particular attention to patient benefit. An early interactive information session including dental nurses, highlighting the opportunities presented by the trial for enhanced patient interaction and training. Inclusion of communication skills training targeted to dental nurses. Monthly newsletters aimed at dental nurses and wider practice staff, with the addition of real dental nurse stories about their involvement in the trial and a quiz and prize element. Engagement lunches for dental nurses as a reward for participation. Use of communication modes congruent with dental nurse preferences i.e. WhatsApp messages rather than emails. Regular site visits to increase self-efficacy and confidence with research activities. Dental nurse awards evening to celebrate trial achievements (i.e. best recruiter etc.) Confidence was found to be a major contributing factor to intervention delivery success, detailed in the feasibility observation below: I passed the booklet back to the nurse, and they started going through the booklet. They didn’t explain what the booklet was for. They read out the title on each page loudly, but the rest of the information on the pages was said very quietly and sound a little muddled. 
The walk through of the booklet didn’t flow, and it’s more like they were reading it to them themselves under their breath to familiarise themselves with the content. DN02: (page 2) “ For healthy teeth do I need to go? ” “ This is Megan , you can see about her story on the video ”. Page 3 is skipped. Page 4 “ Keeping on top of it ” “ It’s important to go all the time ” Page 5 “ This is a picture of a tooth that only the dentist could see , it shows the decay ”. It is very difficult to hear what DN02 was saying, and the overall feeling is someone who lacks confidence. I felt harsh making them deliver when clearly, they didn’t feel ready with any confidence. Observation 05, Site 01: DN02. At the setup phase of the feasibility study, it was stipulated that effective delivery of the intervention would require experienced nurses with proficient communication skills. This expectation was based on the belief that such traits would facilitate the skills required to successfully deliver the intervention. However, implementation revealed challenges to this ideal. Informal discussions with dental staff at sites, recorded in field notes, noted that dental teams could use trial participation as an opportunity to enhance the communication skills of their staff involved in the research. This experience highlights the existence of conflicting priorities when conducting research. As we found dental nurse attributes cannot be guaranteed, the RETURN main trial training will include elements specifically designed to increase confidence and communication skills, including enhanced role-play and a coaching style training approach. In addition, training sessions will not be fixed in length, and instead provision will be based on individualised need. This will be achievable as shadowing training is planned to occur concurrently with patient recruitment, so as not to hamper trial progress. The BCC framework recommends training all interventionists together. In the primary dental care setting, this would require inviting dental teams to converge in a mutually convenient location, and taking staff members out of clinic was found to be problematic during the feasibility study. Instead, in the RETURN main trial, a model will be used where site personnel are trained together. As this method could affect the standardisation of the training delivered, multiple strategies were designed to mitigate that risk: Implementation of a ‘train the trainers’ training model led by a clinical psychologist. Using the same team of trainers throughout the trial. Using identical training materials for each site. Using the same role play tasks with all teams trained. Using a training manual and training videos. The development of a central website to house all training materials, as well as providing hard copies of all materials to all trainees. Use a training content checklist to ensure all training components were delivered to all dental teams (see Additional File ). Skill acquisition was not measured as part of the feasibility study. However, observations demonstrated variation in competency between dental nurses. Therefore, a plan was developed for the RETURN main trial to test skill acquisition during the different phases of training: Training phase 1 Good Clinical Practice Training – A one-hour online module. Skill acquisition measured through an online quiz, with a pass mark of 80%. Training phase 2 Intervention training – three hours, face to face delivery with a mixture of didactic learning, open discussions, and role plays. 
Skill acquisition measured through discussion and observations by a RETURN trainer through an intervention delivery skill acquisition checklist (see Additional File ). Training phase 3 On the job shadowing training – the amount will depend on demonstration of competencies. Skill acquisition will be measured through in vitro observations using the RETURN checklist. Each interventionist will need to achieve a score of 80% within each intervention component in a single session to be signed off as competent to deliver the intervention independently. Scoring will be conducted by the RETURN trainers and scoring decisions will be supported by the guidance manual. Skills drift was not explicitly monitored during the feasibility study. However, from feasibility observations, it was discovered that intervention skills needed to be practiced regularly to be maintained. Therefore, a strategy to reduce skills drift was developed for the RETURN main trial: Frequent (at least one per month, per nurse) scoring and feedback of audio-recorded interventions using the scoring checklist, including elaborating strengths and areas for development. Triggered site visits to provide additional booster training and support in the event of low scoring (< 60% in any one component). Triggered (by consistent low scores) or requested reflective practice sessions, wherein a selected audio-recording will be discussed with the dental team at site, focusing on intervention elements that went well, and things that could be improved or done differently. Maintaining a collaborative coaching style approach to all feedback provision, booster training and reflective practice sessions to maintain relationships between the trainers and the dental nurses. Stark differences between the skills and experience levels of the feasibility dental nurses were found. The dental nurses involved in the delivery of the RETURN intervention study were not selected by the research team; they were volunteered by the dental practice owners / managers due to their availability and expression of interest in taking part. DN02 had less than 2 years’ experience of dental nursing and lacked confidence with patient communication. DN01 had more than 10 years’ experience, demonstrated good communication skills and overall was more confident in their approach to the intervention. This quotation from DN02 describes this: Yeah. I don’t know it might be easy for other nurses but for my range of vocabulary to like GCSE, maybe some words I found difficult, and how it works, like the way it’s [training materials] worded was difficult.
If it was more informal, like ‘What are we going to do?’ ‘We’re going to do this’. Like a chatty kind of presentation maybe. Interview with DN02 There were also differences in day-to-day responsibilities within their respective dental practices, with DN01 taking a more patient engaged role than DN02. These contrasting quotations demonstrate this: It’s very difficult, you know, especially for nurses because they do not have a lot of contact with patients. It’s only the dentist that takes over everything. So we do our own bit in surgery, cleaning, helping, but we don’t have conversations like that with patients. Interview with DN02 I like talking to patients and I like the interaction and chatting with them and, you know, talking to different people as well and finding out their barriers. I think we seem a bit more human to them as well when we sit down and have a chat with them and we’re not just the scary people who work in the dentist. Interview with DN01.
Use a scripted curriculum or treatment manual Based on feasibility observations, scripts will not be utilised in the RETURN main trial. Instead, a selection of prompts will be provided to the nurses to ensure the intervention’s essential components are delivered. These prompts will take the form of the training manual (including intervention delivery cheat sheets), the intervention crib sheet, and videos demonstrating intervention delivery. Some components however, are ‘scripted’ within the intervention materials themselves, such as the goal and action planning section (see Additional File ). Assess non-specific effects through multiple methods and on an ongoing basis Non-specific factors (such as empathy and components that lend themselves to the target communication style) will be assessed as a stand-alone domain within the RETURN checklist. Nonspecific effects will also specifically be discussed during shadowing training. Ensure both adherence to the protocol and competency of intervention delivery Adherence and competency of intervention deliveries will be assessed through the application of the RETURN checklist throughout the main trial. In addition, 100% of all available recordings will be assessed at the end of the main trial to provide a comprehensive overview of the adherence and competency of intervention deliveries. A fidelity threshold of 80% in every domain per intervention delivery will be applied when scoring the recordings. Ensure participants’ understanding of the intervention Although data collected from the patients during the feasibility study suggested that patients overwhelmingly found the intervention useful, understandable and relevant, it is helpful to outline here the steps taken to enhance participants’ understanding of the intervention during its development: The RETURN intervention is designed to be engaging, specifically targeted to the trial population.
An extensive patient and public involvement (PPI) work stream fed into its design (full details have been published elsewhere ), with the aim of ensuring the materials were culturally relevant, containing congruent messages and images to the trial population. A design company and a professional illustrator were employed to embed these strategies. To account for different learning styles, information was presented and repeated using multiple formats - verbal, written, pictorial and videography. The intervention materials were written to a reading age of 8 years to ensure health literacy inclusivity. Intervention delivery sessions are formatted as reciprocal conversations, and therefore by design, mutual understanding between the patient and dental nurse is embedded. During training and throughout the recruitment period, intervention deliveries will be scored, and feedback provided to ensure that ‘reciprocity’ and patient understanding is embedded, with these criteria factored into the RETURN checklist. Ensure participants’ ability to perform behavioural skills The RETURN intervention seeks to target the behaviour of routine dental appointment visiting. To ensure patients’ ability to perform the behavioural skills required, the intervention was designed to be tailored, considering obstacles unique to individuals’ lives. The intervention culminates in a goal setting and action planning exercise, where participants think through their individual circumstances, and write out SMART (specific, measurable, achievable, relevant and time-bound) goals and plans to help them to overcome their barriers. In this way the target behavioural skills were articulated, discussed and broken down into small actions. From the feasibility observations, this element of the intervention needed improvement, specifically around patient engagement. The nurse put the booklet to one side, and then took the planning booklet from their knee. “We know that writing plans helps”. I felt this introduction didn’t really explain to the participant what the nurse was asking them to do – The nurse looked at me to help as they were getting their words muddled…The nurse devised the plan for the participant, rather than letting the participant make the plan for themselves. The patient set their goal themselves, but they did not put in much detail. They wrote down 3 words and didn’t discuss this with the nurse at all. DN02, Observation 11 For the RETURN main trial, several strategies will therefore be implemented to improve how assessment of behavioural skills were conducted during the intervention delivery sessions: Training will include a dedicated component on how to facilitate goal setting and action planning, emphasising the importance of facilitating and not leading the task, and how to encourage patients to think through and articulate their own mechanisms. Goal setting and action planning have been included on the RETURN checklist, and timely feedback will be provided. A follow-up text message will be sent to participants a week post-intervention including the participants’ own wording from their goals and plans set within the intervention sessions to reinforce behavioural skills and build self-efficacy. The 6-month follow-up telephone call to patients will explore their comprehension of the intervention and how meaningful they found it to track receipt. A component of the intervention conversation will encourage discussion around what was achieved during the intervention session. 
This has been designed to improve participant receipt of the intervention by setting intentions. This element is also included in the RETURN checklist assessment.
Participant performance of the intervention skills will be assessed in settings in which the intervention might be applied Data was collected from patients of the feasibility study amid the first COVID lockdown restrictions (May – September 2020), and accordingly it was not possible to assess enactment at that time. Therefore, as part of the RETURN main trial telephone follow-up at three time points, questions will be included about whether and how the intervention materials and associated intervention skills had been used since leaving the urgent care dental setting. Questions will focus on which parts of the intervention had been used, whether the intervention skills had been enacted (i.e. phoning for a dental appointment, exploring which dental practice they may like to contact, attending a dental appointment) and how the intervention supported any actions taken to attend a routine dental appointment. Additionally, enactment strategies are embedded within the intervention materials themselves. Some materials are labelled ‘to look at at home’, providing encouragement and support in locating a dentist, making an appointment and thereafter attending an appointment – the behaviours targeted by the intervention. The full RETURN fidelity strategy is summarised in Table . The strategies presented there show the tangible actions taken to attend to the various intervention fidelity recommendations, which may help other researchers to think through the strategies that will apply to their studies (i.e. using audio recordings to monitor skills drift).
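To make the scoring rules referenced throughout this strategy concrete, the sketch below shows one way the checklist arithmetic could be applied to a single audio-recorded delivery: per-domain percentages, the 80% competency/fidelity threshold applied to every domain, and the booster-support trigger used when any component falls below 60%. It is an illustration only; the domain names, item structure and data layout are assumptions invented for the example and are not the actual RETURN checklist, and only the thresholds and the at-least-monthly scoring cadence come from the strategy described above.

```python
from dataclasses import dataclass
from typing import Dict, List

# Illustrative thresholds taken from the strategy described above.
PASS_THRESHOLD = 0.80     # competency/fidelity threshold applied to every domain
BOOSTER_THRESHOLD = 0.60  # a score below this in any one component triggers booster support


@dataclass
class ScoredDelivery:
    nurse_id: str
    recording_id: str
    # Checklist items scored per domain: 1 = delivered as intended, 0 = not delivered.
    item_scores: Dict[str, List[int]]


def domain_percentages(delivery: ScoredDelivery) -> Dict[str, float]:
    """Proportion of checklist items achieved within each scored domain."""
    return {
        domain: sum(items) / len(items)
        for domain, items in delivery.item_scores.items()
        if items  # skip unscored domains to avoid division by zero
    }


def assess(delivery: ScoredDelivery) -> Dict[str, object]:
    """Apply the pass and booster rules to one scored recording."""
    scores = domain_percentages(delivery)
    return {
        "nurse_id": delivery.nurse_id,
        "recording_id": delivery.recording_id,
        "scores": scores,
        "passes_all_domains": all(s >= PASS_THRESHOLD for s in scores.values()),
        "booster_support_needed": any(s < BOOSTER_THRESHOLD for s in scores.values()),
    }


# Example with made-up scores; hypothetical domain names only.
example = ScoredDelivery(
    nurse_id="DN01",
    recording_id="site01_recording_023",
    item_scores={
        "rapport_and_non_specific_factors": [1, 1, 1, 1],
        "booklet_walkthrough": [1, 1, 0, 1, 1],
        "goal_setting_and_action_planning": [1, 0, 0, 1],  # 50%: triggers booster support
    },
)
print(assess(example))
```

In the strategy above, this kind of scoring would be carried out by the RETURN trainers on at least one recording per nurse per month, with strengths and areas for development fed back to the site.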
This article presents a comprehensive fidelity strategy to be embedded within the RETURN main trial. To the best of the authors’ knowledge, this is the first published fidelity strategy for the testing of a BCI in the primary care dental setting. This strategy has sought to balance the needs of both the research and the dental practice context. Research has shown that outcomes are improved when interventions are delivered with a high degree of fidelity , and one review found that effect sizes are at least 2 to 3 times higher when interventions are delivered with high intervention fidelity . In addition, by devising and implementing a robust fidelity plan, theoretically this allows for the assessment of ‘infidelity’ and for exploration of how differences in fidelity may be associated with outcomes . The development of a comprehensive fidelity strategy for use in the RETURN main trial therefore seeks to provide the methodological assurances necessary to determine whether the RETURN intervention is effective or not. In addition, published fidelity strategies can serve as blueprints for other researchers, enabling the replication of interventions across various settings or populations . This facilitates accurate implementation and consistency among studies, ultimately promoting the reproducibility of findings. Furthermore, the dissemination of fidelity strategies enhances transparency and accountability in research, allowing stakeholders—including funding agencies, peer reviewers, and the broader scientific community—to assess the rigor and validity of study methodologies, thereby ensuring ethical conduct and integrity in research practices . We would encourage the publication of fidelity strategies as a way of sharing best practice with others in the field. One strong message from the findings of the feasibility analysis was the importance of the role of the research team in dental research. It is clear from this study that research teams facilitating BCIs in the primary dental care context need to be mindful of the constraints of the setting and the pressures and skill mix of the healthcare professionals within it. Whilst the primary aim of the strategies developed for the RETURN main trial is to enhance intervention fidelity, a secondary aim of the selected strategies is to minimise burden for the dental teams involved. This is important as the primary dental care setting routinely incurs challenges such as time and staffing limitations , and for sites working to provide urgent clinical care in a target driven remuneration system (such as the NHS) as in RETURN, there are additional pressures which may well have been exacerbated post-COVID-19 . An example of this ‘shift’ in burden is the decision to audio-record all of the intervention delivery sessions to monitor skills and assess delivery fidelity (rather than other methods such as asking the dental nurses to complete check-lists as has been adopted in other primary care dental trials ). This will reduce the time and process burden on the dental nurses within the wider context of the RCT which, outside of the intervention delivery, has its own lengthy procedural requirements (e.g. consenting, randomising, data collection, data entry etc.).
Additionally, the use of audio-recordings for fidelity monitoring is considered the gold-standard , and whilst acknowledging that this method is researcher resource intensive, it is deemed the most appropriate method for use in the primary dental care context. This publication provides a thorough description of the RETURN fidelity strategy, which should be considered alongside the RETURN main trial results when they are published to assure the scientific integrity of our research practices. Limitations This study has several limitations. Observations and interviews were conducted at only two sites with two dental nurses due to the small-scale nature of the feasibility study and early termination due to the COVID-19 pandemic. This means that a narrow range of perspectives was included in the findings used to develop the fidelity strategy. Additionally, several BCC recommendations were not fully implemented in the RETURN fidelity strategy. Specifically: Monitoring of Control Participants: The fidelity plan did not include monitoring control participant activities, although post-delivery participant self-report contamination assessments were conducted. This decision was made to enhance the acceptability of audio recordings among dental teams and patients. Only intervention delivery sessions were recorded, with control group conduct comprehensively covered in training. Protocol Review Group: A protocol review group was not established to ensure the active ingredients of the intervention were fully operationalised due to resource limitations. Nevertheless, the intervention and training plan received input from two psychologists who were part of the RETURN team. Matching Interventionist Characteristics: Due to constraints within the setting, it was not feasible to match key characteristics of the trial population with those delivering the intervention. However, a deliberate choice was made to involve dental nurses rather than dentists in the study design to facilitate rapport building. This decision was supported by PPI work. Use of Independent Coders: The use of independent coders was not feasible for the RETURN main trial due to resource limitations. However, as per recommendations by Borelli , the coding team was blind to outcome data. Pre and post-test measures: To minimise dental staff burden and maintain proportionate measures, pre- and post-test process and knowledge assessments were not utilised in the RETURN main trial. The RETURN feasibility study was conducted in England, UK, within the context of the NHS primary dental care system. Within this system, the dental sites involved in the delivery of the RETURN research were providing commissioned urgent dental care in accordance with agreed General Dental Services (GDS) contracts. GDS contracts require a pre-defined number of Units of Dental Activity (UDAs) to be fulfilled within a year and for an agreed remuneration value . However, any unmet clinical delivery targets can result in financial claw-back. Given that within this system, there was no additional capacity allocated for the delivery of the RETURN research, there was potential for tension between clinical contractual obligations and research delivery. Consequently, the RETURN fidelity strategy was developed with this context in mind.
Whilst the dental practice owners were financially compensated for staff time spent on the research delivery, no additional levers were in place to build research capacity, with those ‘doing’ the research often delivering their usual role alongside. This resulted in challenges and meant that the RETURN research team needed to be mindful of the context, and the fidelity strategies developed. In addition, those delivering the research (in this case dental nurses) were not directly in receipt of any remuneration for their additional efforts, and this was also considered during the development of our strategy. Comparisons of this context to other primary care dental health systems across the globe suggest alternative fidelity strategies could be employed to best suit different contexts, and that the fidelity approach laid out within the paper may need to be adapted to better suit each context. For example, in some States in the United States of America (USA), the dental health system is solely privately funded . A private dental health system, not constrained by targets in the same way as the GDS contracts within the NHS, in theory, could have more scope for research taking place alongside clinical delivery without it being squeezed out by business pressures. This could lead to a lighter-touch fidelity strategy employment. In addition, schemes such as the National Dental Practice-Based Research Network in the USA or the Australian Dental Practice-Based Research Network, which build capacity for research delivery within dental practices, could play a key role in research facilitation which again, may alter fidelity strategy approaches. Finally, an important consideration to acknowledge is what it means for an RCT where researchers intervene to influence intervention fidelity. It could be argued that by manipulating intervention fidelity it changes the implementation landscape to an extent that recorded outcomes are no longer the simple effect of trial allocation, and therefore, that this may not be representative of outcomes that would be achieved in the real-world . However, for reasons described throughout this article, the primary dental care environment is unique and challenging, and therefore the decision was taken to ensure that intervention fidelity will be closely monitored and improved throughout the RETURN main trial. Dental teams can be inexperienced with conducting research trials , and researchers also have a responsibility to those involved in the conduct of research to provide support and guidance to ensure the best outcomes for the trial. Conclusions and implications The fidelity strategy outlined in this paper serves as a blueprint for researchers conducting BCI trials in primary dental care settings. This environment presents unique challenges, necessitating a contextually informed approach to enhance fidelity. However, many strategies detailed herein could be transferable to other BCI trials in similar contexts, despite being specifically tailored for the RETURN trial. The RETURN fidelity strategy summary could be a useful tool for other BCI trialists in the primary dental care setting, as this provides tangible examples of how the strategies were operationalised. We imagine that the extent of these strategies will alter depending on the context, resource availability and the intervention itself. This publication outlines a best practice approach and should be read in conjunction with the forthcoming results of the RETURN main trial.
Below is the link to the electronic supplementary material: Supplementary Materials 1–6.
Cardiometabolic risk factors as determinants of peripheral nerve function: the Maastricht Study
28be967a-afed-476d-9f6a-8bc049a13d09
7351845
Physiology[mh]
Diabetic neuropathy is one of the most common complications of diabetes mellitus , and a major cause of reduced quality of life, gait disturbances, foot ulceration, fall-related injuries and disability . During their lifetimes, up to 50% of patients with type 2 diabetes are affected by some form of neuropathy, of which distal symmetric polyneuropathy is most common . Moreover, neuropathy was already present in 10–20% of patients at the time of diagnosis of type 2 diabetes , suggesting that neuropathy is initiated in early stages of the pathogenesis of diabetes. Indeed, studies have demonstrated that neuropathy is present in the prediabetic stage , although not consistently . Traditionally, it has been suggested that hyperglycaemia is the main driver of microvascular damage and subsequent neuropathy. Therefore, glycaemic control is considered fundamental in its prevention . However, a study in patients with type 2 diabetes showed that the aggregation of components for the metabolic syndrome was significantly associated with sensory neuropathy . In subsequent studies, the metabolic syndrome has been associated with neuropathy regardless of the presence of (pre)diabetes , but not consistently . As increased blood glucose levels, even in the non-diabetic range, as well as other cardiometabolic risk factors could contribute to microvascular dysfunction, we postulated that each of these factors contributes to a progressive decline of nerve function, before the development of type 2 diabetes or overt neuropathy. To examine this, studies are needed that do not dichotomise risk factors (such as the presence of the metabolic syndrome) or dichotomise outcomes (such as the presence of neuropathy), but that analyse risk factors and outcomes as continuous variables. In addition, estimates of the prevalence of neuropathy may vary depending on the methods used, which may have contributed to the discrepancies in reported associations between metabolic risk markers and neuropathy . Assessing nerve function as a continuous measure with objective electrophysiological techniques may therefore be more relevant to study the aforementioned associations . However, such population-based studies are scarce and mainly focus on components of the metabolic syndrome . In light of the above, our aim was to examine the associations of multiple classical and newer cardiometabolic risk factors and mildly elevated blood glucose levels (such as in prediabetes [i.e. impaired fasting glucose and/or impaired glucose tolerance]) with measures of motor and sensory nerve function assessed by electrophysiological techniques in a large, population-based cohort: the Maastricht Study. In addition, we assess their associations with clinical measures such as vibration perception threshold (VPT) and neuropathic pain. We hypothesised that unfavourable cardiometabolic risk and elevated blood glucose levels within the prediabetic range are associated with impaired nerve function, independently of fasting glucose and of each other. Population We used data from the Maastricht Study, an observational, prospective, population-based cohort study. The rationale and methodology have been described previously (also see electronic supplementary material [ESM] Methods). The present report includes cross-sectional data from 3451 participants, who completed the baseline survey between November 2010 and September 2013. 
The study complies with the Declaration of Helsinki and has been approved by the institutional medical ethics committee (NL31329.068.10) and the Minister of Health, Welfare and Sports of the Netherlands (Permit 131088-105234-PG). All participants gave written, informed consent. Risk factors We considered the following cardiometabolic risk factors: age, fasting glucose, HbA 1c , 2 h post-load glucose (for additional analyses), triacylglycerol, HDL- and LDL-cholesterol, waist circumference, inflammation, office systolic blood pressure (24 h blood pressure for additional analyses) and diabetes status. In addition, we considered smoking, lipid-modifying and antihypertensive medications, and the metabolic syndrome. Details of assessments have been previously described . Inflammation markers were measured in plasma and included high-sensitivity C-reactive protein (CRP), serum amyloid A (SAA), IL-6, IL-8, TNF-α and soluble intercellular adhesion molecule-1 (sICAM-1). These were converted into a sum-score for analyses, calculated by summation of the individual z scores of inflammation markers. Such a summary score predicted future cardiovascular events and mortality in earlier studies . Use of medication was assessed during a medication interview. Smoking behaviour was derived from a questionnaire. To determine diabetes status, all participants (except those who used insulin) underwent an OGTT after an overnight fast . Participants were categorised according to the World Health Organization 2006 criteria into normal glucose metabolism (NGM), prediabetes (impaired fasting glucose and/or impaired glucose tolerance) or type 2 diabetes. The metabolic syndrome was defined according to the Adult Treatment Panel (ATPIII) guidelines (see ESM Methods). Nerve conduction study Nerve function of the lower limbs was assessed with a Medelec Synergy electromyography apparatus (version 15.0, Viasys Healthcare, UK) using surface electrodes. Before testing, feet and lower legs were warmed in warm water (38°C) for a minimum duration of 10 min, to ensure that skin temperature (measured on the dorsal surface of the foot) was >32°C. Motor peroneal and tibial nerves and sensory sural nerve were examined at supra-maximal stimulation. Peroneal nerve function was recorded on the right leg, at the digitorum brevis muscle with stimulations at the ankle (8 cm proximal from the recording site), below the fibular head and above the fibular head. Tibial nerve function was recorded on the left leg, at the abductor hallucis muscle with stimulations at the ankle and in the popliteal fossa. Sural nerve function was recorded on the left leg between the lateral malleolus and the Achilles tendon while stimulating 12 cm proximal to the recording site. Variables analysed were compound muscle action potential (CMAP) amplitudes (stimulated at the ankle), nerve conduction velocities (NCV) of the peroneal and tibial nerves, and the sural sensory nerve action potential (SNAP) amplitude and NCV. Peripheral vibration perception Peripheral VPT was tested by use of a Horwell Neurothesiometer (Scientific Laboratory Supplies, Nottingham, UK). Vibration thresholds were tested three times at the distal phalanx of the hallux on both feet. Mean threshold was calculated for each foot and the highest mean threshold was used for analyses. Neuropathic pain Neuropathic pain was defined as a score ≥3 on the DN4 interview . 
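Several of the measures described above are derived rather than read directly from a device: the inflammation score is the summation of the individual z scores of hs-CRP, SAA, IL-6, IL-8, TNF-α and sICAM-1; the VPT used in analyses is the higher of the two per-foot means of three readings; and neuropathic pain is a DN4 interview score of 3 or more. The analyses themselves were performed in SPSS, so the short Python sketch below is only an illustrative re-expression of those derivations, and every column name in it is an assumption rather than the study’s actual variable name.

```python
import pandas as pd

# Hypothetical column names; the study's real dataset layout is not published here.
INFLAMMATION_MARKERS = ["crp", "saa", "il6", "il8", "tnf_alpha", "sicam1"]


def zscore(series: pd.Series) -> pd.Series:
    """Standardise a variable to mean 0 and SD 1."""
    return (series - series.mean()) / series.std()


def add_derived_measures(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()

    # Inflammation sum-score: summation of the individual z scores of the six markers.
    out["inflammation_score"] = sum(zscore(out[m]) for m in INFLAMMATION_MARKERS)

    # VPT: mean of three readings per foot, then the highest of the two foot means.
    out["vpt_left"] = out[["vpt_left_1", "vpt_left_2", "vpt_left_3"]].mean(axis=1)
    out["vpt_right"] = out[["vpt_right_1", "vpt_right_2", "vpt_right_3"]].mean(axis=1)
    out["vpt"] = out[["vpt_left", "vpt_right"]].max(axis=1)

    # Neuropathic pain: DN4 interview score of 3 or more.
    out["neuropathic_pain"] = (out["dn4_score"] >= 3).astype(int)

    return out
```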
Covariates Questionnaires were used to collect information on age, sex, educational level, alcohol consumption (high consumer [women >7 glasses per week; men >14 glasses per week]), cardiovascular disease history (see ESM Methods) and mobility limitations (defined as having difficulty walking 500 m or climbing stairs). Kidney function (estimated glomerular filtration rate [in ml min −1 1.73 m −2 ]) was calculated from serum creatinine and cystatin . Statistical analyses First, population characteristics and measures of nerve function were described for the total population and by tertiles of sural SNAP amplitude using the appropriate descriptive statistics. Second, associations between cardiometabolic risk factors and nerve function were examined with standardised linear regression analyses. All continuous risk factors and the six outcomes of nerve function were standardised to z scores (with a mean of 0 and an SD of 1) in order to compare the magnitudes of observed associations between all risk factors and outcomes (see ESM methods for details). Two models were fitted with covariates that we selected a priori. In the first model, associations were adjusted for age, sex, height, educational level, skin temperature at start of nerve function assessment and heating time. In the second model, all associations were additionally adjusted for all other risk factors as well as alcohol intake, cardiovascular disease history, mobility limitations and kidney function. In addition, a composite score for nerve function was calculated as the mean of z scores of individual measures of nerve function. A composite score is considered to be more sensitive and reproducible for detection of peripheral neuropathy than individual attributes of nerve conduction , and we report this score to summarise the associations with nerve conduction outcomes. Associations were expressed as standardised regression coefficients ( β ) with 95% CIs. For undetectable sural nerve responses ( n = 165), the likelihood of an absent response (OR with 95% CI) was calculated using logistic regression analyses using similar adjustments as described above. Third, we examined the associations of prediabetes and type 2 diabetes with nerve function. Associations were adjusted for age, sex, height, waist circumference, inflammation, smoking, alcohol intake, cardiovascular disease history, mobility limitations, skin temperature at start of nerve function assessment and heating time. To test for a linear trend across NGM, prediabetes and type 2 diabetes, glucose metabolism status was categorised (NGM = 0, prediabetes = 1 and type 2 diabetes = 2) and used in the linear regression models. Fourth, we examined the associations of cardiometabolic risk factors with VPT (linear regression) and with neuropathic pain (logistic regression) in similar models as described above. In addition, the associations of the metabolic syndrome (overall) and the number of criteria for the metabolic syndrome (3, 4 or 5 criteria vs 0–2 criteria) with nerve function were examined. Potential interaction effects of sex and of type 2 diabetes were assessed by computing interaction terms (sex × risk factor and type 2 diabetes × risk factor) and adding these (separately) in the fully adjusted models. No interaction effect of sex was observed. Overall, analyses stratified on type 2 diabetes yielded non-significant differences, except for inflammation (see below). 
Therefore, we present the analyses for the total population in the main manuscript and stratified analyses are presented in the ESM. All analyses were performed using SPSS version 25.0 (IBM Corp, Armonk, NY, USA).
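As a concrete illustration of the analytical approach just described, the sketch below standardises a risk factor and a nerve-function outcome to z scores, fits the model 1 adjustment set, and builds the composite nerve-function score as the mean of the individual outcome z scores. The study analyses were performed in SPSS version 25.0; this Python/statsmodels version is only an assumed re-expression for illustration, and the variable names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names for the six nerve conduction measures.
NERVE_OUTCOMES = ["sural_snap", "sural_ncv", "peroneal_cmap",
                  "peroneal_ncv", "tibial_cmap", "tibial_ncv"]


def zscore(series: pd.Series) -> pd.Series:
    return (series - series.mean()) / series.std()


def composite_nerve_score(df: pd.DataFrame) -> pd.Series:
    """Composite score: the mean of the z scores of the individual nerve function measures."""
    return pd.concat([zscore(df[o]) for o in NERVE_OUTCOMES], axis=1).mean(axis=1)


def standardised_association(df: pd.DataFrame, risk_factor: str, outcome: str):
    """Standardised beta with 95% CI for one risk factor and one outcome (model 1 covariates)."""
    data = df.copy()
    data["x"] = zscore(data[risk_factor])
    data["y"] = zscore(data[outcome])
    model = smf.ols(
        "y ~ x + age + C(sex) + height + C(education) + skin_temp + heating_time",
        data=data,
    ).fit()
    beta = model.params["x"]
    ci_low, ci_high = model.conf_int().loc["x"]
    return beta, (ci_low, ci_high)
```

Model 2 would extend the same formula with the remaining risk factors plus alcohol intake, cardiovascular disease history, mobility limitations and kidney function, as described above.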
Population Data were available for 2401 participants. A flow diagram with details is provided in ESM Fig. . Compared with participants included in this study, those excluded had a similar distribution of sex, but were older, had a higher BMI and more often had type 2 diabetes (ESM Table ). In Table , the population characteristics are provided for the total population and by tertiles of sural SNAP amplitude. Compared with those in the highest tertile, those in the lowest tertile were older, more often male, had elevated levels of multiple cardiovascular risk factors, had higher prevalence of the metabolic syndrome and type 2 diabetes, and more frequently reported the use of medication. Age Age (unit = 8.2 years) was inversely associated with nerve function. Associations with sural nerve SNAP amplitude and tibial nerve CMAP amplitude were most pronounced: β = −0.30 (−0.35, −0.25) and β = −0.31 (−0.36, −0.25), respectively (Fig. ). Further, age was associated with higher VPT (Fig. ), but not with neuropathic pain (Fig. ). Fasting glucose and HbA 1c Higher glucose level (unit = 1.6 mmol/l) was associated with worse nerve function for all measures of nerve function; the associations with peroneal and tibial NCV appeared to be strongest: β = −0.17 SD (−0.21, −0.13) and β = −0.18 SD (−0.23, −0.14), respectively. For HbA 1c (unit = 9.6 mmol/mol (3.0%)), similar associations were observed (Fig. ). Both glucose and HbA 1c were also associated with higher VPT and neuropathic pain (Fig. ). Waist circumference Larger waist circumference (unit = 13.1 cm) was associated with lower sural SNAP ( β = −0.08 SD [−0.13, −0.02]) and tibial CMAP amplitude ( β = −0.15 SD [−0.20, −0.10]). Unexpectedly, it was also associated with higher peroneal CMAP amplitude and NCV (Fig. ). Further, waist circumference was associated with higher VPT, β = 0.08 SD (0.04, 0.13) (Fig. ), but not with neuropathic pain (Fig. ). Triacylglycerol, HDL- and LDL-cholesterol, and lipid-modifying medication Higher levels of triacylglycerol (unit = 0.9 mmol/l) were not associated with nerve function (Fig. ). However, triacylglycerol was associated with lower tibial nerve function in model 1 (ESM Fig. ). HDL-cholesterol (unit = 0.5 mmol/l) was not associated with better nerve function (Fig. ). LDL-cholesterol (unit = 1.0 mmol/l) appeared to be associated with better nerve function. The use of lipid-modifying medication appeared to be associated with lower nerve function in model 1 (ESM Fig. ), but these associations were attenuated (some even reversed) in the fully adjusted models. Similarly, lipid-modifying medication was associated with higher VPT and neuropathic pain, but not in fully adjusted models (Fig. ). Systolic blood pressure and antihypertensive medication Higher systolic blood pressure (unit = 17.8 mmHg) was not associated with worse nerve function. The use of antihypertensive medication was associated with lower nerve function (Fig.
), specifically with peroneal CMAP amplitude and NCV: β = −0.13 (−0.23, −0.03) and β = −0.16 (−0.25, −0.07), respectively (Fig. ). Blood pressure and use of antihypertensive medication were not associated with VPT or neuropathic pain in fully adjusted models (Fig. ). Inflammation Inflammation (unit = z score) was associated with worse nerve function: β = −0.04 (−0.07, 0.00) (Fig. ). Associations of inflammation with higher VPT and neuropathic pain were observed, but were not statistically significant in fully adjusted models (Fig. ). However, an interaction effect of diabetes status was observed, and therefore, in ESM Fig. , associations with individual inflammation markers are presented stratified on the presence of type 2 diabetes. Inflammation was only associated with lower nerve function and VPT in those with type 2 diabetes. Smoking Current smoking (vs never smoking) was associated with lower nerve function: β = −0.11 SD (−0.17, −0.04) (Fig. ). Former smokers also had lower peroneal NCV: β = −0.12 (−0.20, −0.05) (not shown). Smoking was also associated with higher VPT ( β = 0.17 [0.06, 0.28]) and neuropathic pain (OR 2.13 [1.38, 3.29]) (Fig. ). Absent sural response The associations between cardiometabolic risk factors and absent sural nerve response ( n = 165) are shown in ESM Fig. . We observed greater odds for an absent sural nerve response for higher age, fasting glucose, HbA 1c and waist circumference, consistent with the associations observed above. Diabetes status Type 2 diabetes was associated with worse nerve function (Fig. ). Further, prediabetes appeared to be associated with worse nerve function, although this was only statistically significant for peroneal NCV: β = −0.11 SD (−0.21, −0.01). Nonetheless, linear trend analyses showed a consistent trend across NGM, prediabetes and type 2 diabetes; p < 0.01 for all measures of nerve function. Type 2 diabetes was also associated with higher VPT ( β = 0.19 SD [0.10, 0.28]) and neuropathic pain (OR 2.03 [1.39, 2.95]) (Fig. and Fig. , respectively). Further, a trend across NGM, prediabetes and type 2 diabetes was seen for VPT and neuropathic pain (both p < 0.001). Analyses stratified on diabetes status In ESM Figs and , associations with nerve conduction measures are shown stratified on type 2 diabetes. Overall, results were similar between those with and those without type 2 diabetes. In those without type 2 diabetes, waist circumference, but not fasting glucose or HbA 1c , was associated with VPT (ESM Fig. ) and neuropathic pain (ESM Fig. ). The metabolic syndrome Presence of the metabolic syndrome was associated with worse sural SNAP amplitude, tibial CMAP amplitude and tibial NCV: β = −0.13 SD (−0.22, −0.04), β = −0.15 SD (−0.23, −0.06) and β = −0.09 SD (−0.17, 0), respectively (ESM Fig. ). A linear trend was observed across 0–2, 3, 4 and 5 criteria of the metabolic syndrome for four out of six measures of nerve function. Similarly, a trend was seen with increasing number of criteria for the metabolic syndrome and higher VPT ( p = 0.05) and neuropathic pain ( p = 0.027). In additional analyses, we replaced fasting glucose with 2 h post-load OGTT values ( n = 2252). These associations appeared somewhat weaker compared with fasting glucose or HbA 1c (ESM Fig. ). Further, we substituted HDL-cholesterol with total-to-HDL-cholesterol ratio. This yielded similar findings. Substituting systolic with diastolic blood pressure or with blood pressure values derived from 24 h measurement also resulted in similar findings (ESM Fig. ).
To our knowledge, this is the largest population-based study examining mutually independent associations of individual cardiometabolic risk factors with peripheral motor and sensory nerve function using electrophysiological techniques as well as clinical measures such as VPT and neuropathic pain. Older age, higher glucose levels, HbA 1c , antihypertensive medication, inflammation and smoking were associated with worse sensory and motor nerve function, without any major differences between these two types of nerves. By and large, the same patterns were seen for VPT and neuropathic pain, except that older age and higher waist circumference were more strongly associated with a poorer VPT and there was no association of age and neuropathic pain. These associations were similar for men and women.
While type 2 diabetes was, as expected, clearly and consistently associated with worse nerve function and neuropathic pain, trend analyses showed that prediabetes also appeared to be associated with worse nerve function and neuropathic pain. Previous studies on the relation between prediabetes and nerve function were inconsistent in their findings, which may be due to discrepancies in defining neuropathy or to dichotomising neuropathy as the outcome . We used continuous, electrophysiological measures of large-fibre nerve function, in different types of nerves (sensory and motor), as primary outcome in order to detect changes at an early stage. In addition, we studied VPTs (a clinical measure of large-fibre dysfunction) and neuropathic pain, which is more related to small-fibre dysfunction. Higher levels of fasting glucose and HbA 1c , even within the normal range, were associated with lower nerve function. Associations with post-load glucose appeared to be somewhat weaker, which was in line with the Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA)/Cooperative Research in the Region of Augsburg (KORA) study . In contrast, in participants without diabetes, fasting and post-load glucose were not associated with VPT and neuropathic pain. In accordance with our results, waist circumference (or obesity) has previously been associated with neuropathy and diminished nerve function in people with and without diabetes . This effect might be mediated by low-grade inflammation, as inflammation is associated with diminished nerve function in patients with diabetes and in the general population . Interestingly, in our study, inflammation was associated with diminished nerve conduction and higher VPT only in people with type 2 diabetes, suggesting that inflammation is a consequence of long-term metabolic damage that starts before overt diabetes and is not a risk factor that initiates large-fibre damage. However, once present, neuro-inflammation might contribute to further progression of nerve damage. Our results on neuropathic pain are partially in line with the higher circulating IL-6 levels in painful diabetic neuropathy as reported in the KORA F4 study, although we observed no association with sICAM-1 . To further delineate the role of inflammation in the development of small-fibre damage, objective techniques such as corneal confocal microscopy or skin biopsy will be needed. The metabolic syndrome has also been associated with diminished nerve function . However, this is not unexpected as individual components of the metabolic syndrome (glucose, waist circumference and, to a lesser extent, antihypertensive medication) were associated with worse nerve function. Results for cholesterol and blood pressure (and, to a lesser extent, triacylglycerol) should be interpreted with caution, as over one-third of the population used lipid-lowering and/or antihypertensive medication. In general, hypertension and hypercholesterolaemia are treated early in the Netherlands. Consequently, in this relatively healthy and well-treated population, ranges of lipids and blood pressure might be too narrow to observe associations. Nevertheless, antihypertensive medication (which suggests a history of exposure to hypertension) was associated with lower nerve function and, to a lesser extent, with neuropathic pain. Hypertension may affect nerve function through damage to the (nerve) microcirculation.
Moreover, as statin use is common in the treatment of diabetes, fasting LDL-cholesterol levels were actually lower in people with type 2 diabetes, compared with people with NGM. This may explain the unexpected finding that LDL-cholesterol was associated with better nerve function. In the Addition study, lower LDL levels were associated with a higher risk of developing diabetic polyneuropathy, and also these authors could not exclude an effect of statins in their analyses . We used electrophysiology, enabling us to detect on a continuous scale differences in large-fibre nerve function that cannot be detected on clinical examination, and studied both motor and sensory nerves. In contrast to earlier studies, we could not observe a difference in the associations of cardiometabolic risk factors with sural or motor nerve function . Partly, this may be due to 165 cases of undetectable sural nerve response in our study. As an undetectable response indicates poor nerve function, this may have led to underestimation in effect size of the associations studied. Further, due to the cross-sectional design of our study, we cannot exclude that cardiovascular risk factors impact on the sural nerve at an earlier stage. For this we need longitudinal data. Previous studies have indicated that axonal damage (typically reflected as lower CMAP or SNAP amplitudes) is more common in diabetes than demyelinating damage (typically reflected by lower NCV) . Indeed, age and waist circumference had higher magnitudes of associations with low CMAP and SNAP amplitudes as compared with NCV, but these differences were not statistically significant. Thus, whether different cardiometabolic risk factors may affect nerve axons or myelin differentially is unclear from our results, but, if present, such a differential effect seems limited. Moreover, we did not examine small fibres with objective measures and it has been suggested that obesity/hyperlipidaemia and hyperglycaemia may have differential effects on small vs large nerve fibres . A complex interplay between several mechanisms including hyperglycaemia, lipotoxicity, oxidative stress and inflammation is thought to play a central role in the pathogenesis of (diabetic) neuropathy . We recently reported that microvascular function was diminished not only in people with type 2 diabetes, but also with prediabetes ; age, smoking and prior exposure to hypertension and dyslipidaemia, and in particular higher levels of glucose (also in the normal range), were all associated with microvascular function . Observations in the current study are in line with these results, suggesting similar risk factors for generalised microvascular damage and early-stage nerve damage. Most likely, preventive or therapeutic measures that target all of these risk factors may be clinically beneficial. However, in contrast to several other microvascular complications, intensive blood glucose control had only a very modest effect in preventing large-fibre neuropathy in type 2 diabetes , and also multi-modal interventions, such as in the STENO-2 or the Look AHEAD (Action for Health in Diabetes) studies , seemed unsuccessful. Hence, prevention of large-fibre neuropathy should probably start at the earliest signs of diabetes, maybe even in the prediabetic stage, and the role of inflammation in the progression of subclinical to clinical neuropathy should be further explored. 
Prediabetes is also associated with abnormalities of the central nervous system , and it remains to be determined whether the risk factors for central nerve abnormalities are the same as those for peripheral nerves. Strengths of this study include the use of nerve conduction testing as an objective measure of nerve function that provides insight into nerve damage at very early stages. We investigated different types of nerve and anatomical parts of the nerve and we also included clinical measures in a large, population-based study of adults (aged 40–75 years). Lastly, our statistical models were mutually adjusted and adjusted for many potential confounders. Nonetheless, residual confounding by non-measured factors may still have occurred. Other limitations include its cross-sectional design, and thus inferences regarding causality should be made with caution. As we did not observe major differences between the individual nerves, we summarised our electrophysiological findings in a sum-score, but this should be viewed as a post hoc analysis. The clinical relevance of the observed associations should be investigated in future studies. Further, waist circumference is a crude measure for adiposity and the underlying biological mechanisms explaining the associations between waist circumference and nerve function should be scrutinised. Finally, the inclusion of a relatively healthy population in the Maastricht Study and the exclusion of participants with incomplete assessments of nerve function may have resulted in selection bias, as these participants were older and more often had diabetes. This may have led to an underestimation of the associations observed. In conclusion, in adults aged 40–75 years, blood glucose (fasting glucose or HbA 1c ), even in the non-diabetic range, was most consistently associated with (sensorimotor) peripheral nerve function and neuropathic pain. Similarly, those with type 2 diabetes, and to a lesser degree those with prediabetes, had worse nerve function. A larger waist circumference, smoking and use of antihypertensive medication (suggestive of history of exposure to hypertension), independent of glucose and other risk factors, were associated with worse nerve function, and similar patterns were observed with VPT and neuropathic pain. The association with low-grade inflammation was most pronounced in participants with type 2 diabetes. These results imply that early-stage nerve damage may result not only from glycaemic damage, but also from other cardiometabolic risk markers. Consequently, multifactorial approaches should be considered in the prevention of neuropathy, rather than a sole focus on blood glucose. ESM (PDF 1.25 mb)
Effectiveness of an interactive online group intervention based on pain neuroscience education and graded exposure to movement in breast cancer survivors with chronic pain: a randomised controlled trial
7bfd3581-926f-4a33-9eab-840222bed4b8
11458701
Patient Education as Topic[mh]
Breast cancer is the most common type of cancer diagnosed in women , and it is estimated that the incidence of new cases will increase worldwide in the next decades. Survival rates are also increasing, but the survivorship phase is often associated with several cancer-related symptoms such as chronic pain, which can affect women’s quality of life and their social and professional reintegration . As a result, there is an increasing demand for health care that addresses the chronic sequelae of cancer survivorship (e.g. chronic pain, fatigue). In addition, the side effects associated with prolonged use of pain medication make the development and improvement of non-pharmacological treatments essential . Pain neuroscience education (PNE) is a cognitive-based intervention that aims to reconceptualise pain by explaining the neurophysiological mechanisms of pain and empowering people to manage their pain experience . PNE has reported broad benefits in addressing chronic pain in different populations, whether applied in isolation or as an adjuvant therapy , but it has been scarcely investigated in breast cancer . González-Martín et al. pointed out that PNE is an effective intervention for reducing pain intensity and the level of catastrophizing in patients with cancer pain, but no benefits were found in relation to quality of life. These authors, together with other previous reviews , highlighted the need for further studies investigating the benefits of patient pain education programmes based on a biopsychosocial content focused on the understanding of acute and chronic pain mechanisms, the identification of the key factors related to each individual painful experience or the relationship between pain, and our lifestyle habits, among others. Therapeutic exercise is an important tool in the oncology field for improving quality of life, so it can be recommended as a therapy to be combined with patient education . In this line, graded exposure to movement (GEM) is a movement-based intervention that uses therapeutic exercise following the “twin peaks” metaphor proposed by Butler . This metaphor attempts to symbolise how the gradual movement up to a painful baseline could help the system to progressively adapt and achieve more functionality with less pain. GEM has reported benefits in addressing chronic pain in several musculoskeletal conditions previously , but it has not been investigated in the cancer population. In this clinical trial, yoga was applied following the basis of a graded exposure to movement intervention (GEM-Y), as yoga has been shown to be an effective exercise modality for improving quality of life in adults with cancer . Furthermore, yoga is a mind–body exercise modality that allows us to follow a biopsychosocial approach . To our knowledge, the combination of PNE with GEM-Y has never been studied in cancer previously. Therefore, the purpose of this clinical trial was to evaluate whether an interactive online group intervention combining PNE and GEM-Y is more effective than usual care in improving quality of life (primary outcome) and pain experience (secondary outcomes) in breast cancer survivors with chronic pain. Study design A randomised controlled clinical trial was carried out according to the Consolidated Standards of Reporting Trials (CONSORT) Statement . The Template for Intervention, Description and Replication Checklist (TIDieR) was used as a guide to provide transparency and make the intervention replicable. 
The protocol of this study has been registered on clinicaltrials.org with the registry number NCT04965909. Protocol deviations Only one deviation from the registered protocol needs to be reported. The method of data analysis was registered as an intention-to-treat analysis, but due to the adherence rates it was decided to perform a per-protocol analysis. Inclusion and exclusion criteria The inclusion and exclusion criteria were developed following the PICOs model. Inclusion criteria are as follows: 1) women aged between 18 and 65 years; 2) diagnosis of stage 0–III breast cancer; 3) primary treatment (surgery, radiotherapy, and chemotherapy) completed at least 3 months ago but may still be receiving hormone therapy; 4) informed pain related to primary treatment in the last 6 months; 5) access to the Internet and an electronic device that allows the use of the applications used in this study and skills for their use or assistance from a close person who has them; 6) ability to communicate fluently verbally and in writing in the language of the research team (Spanish); and 7) approval to participate in the study by the coordinator of the health team that assisted during the course of cancer and its treatment. Exclusion criteria are as follows: 1) another previous type of cancer or breast cancer recurrence in a period of less than 1 year; 2) medical diagnosis of a neurological or autoimmune disease that limits or prevents exercise; 3) some type of pathology that is associated with a contraindication to physical exercise; and 4) the diagnosis of serious psychiatric or neurologic disorders that do not allow the participant to follow orders. Sampling method and sample’s size calculation For sampling, non-probabilistic convenience and snowball methods were used. The sample size was calculated based on a previous study with partial Eta2 effect size of 0.049 for the time * group interaction in the FACT-B score. Considering two groups, four measurements, a type I risk or α 0.05, type II risk or β 0.20 (study power of 80%), and an estimated dropout rate of 15%, a total of 40 participants (20 per group) are needed to be enrolled. Sample size was calculated using the G*Power software, version 3.1.9.7 (Heinrich-Heine University, Düsseldorf, Germany). Subjects’ recruitment The sample for this study was recruited through the dissemination of the project using social networks (Facebook, Instagram) and with the collaboration of three Spanish breast cancer survivor support associations (Amama Sevilla, AGAMAMA, and ASAMMA). Participation in the study was voluntary and all participants were facilitated a written informed consent that must be signed to be part of the clinical trial. Group assignment and masking For assignment, a random method was carried out using an online tool called ‘random allocation software’ (2.0 version). A stratified allocation was applied according to the women’s age (≤ 45 years old or > 45 years old). On each of the strata, a randomisation was carried out by blocks of constant size. The assignment sequence was hidden from the evaluator and the study subjects through an automated assignment system. The preparation of the sequence, the inclusion of the individuals in each group, and the assignment of the treatments were carried out by different members of the research team. On the other hand, the main researcher was blinded. Nonetheless, the physiotherapist and subjects were not blinded because of the type of intervention. 
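To make the allocation procedure above concrete, the sketch below implements stratified block randomisation by age stratum (≤ 45 vs > 45 years) with blocks of constant size and a 1:1 ratio. It is an illustration only: the trial used the 'random allocation software' tool, and the block size of four, the seeds and the participant fields shown here are assumptions.

```python
# Illustrative sketch of stratified block randomisation (not the tool used in the trial).
import random


def block_randomise(n_participants: int, block_size: int = 4, seed: int = 0):
    """Return a 1:1 allocation sequence built from shuffled, balanced blocks."""
    assert block_size % 2 == 0, "block size must be even for a 1:1 ratio"
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = ["control"] * (block_size // 2) + ["experimental"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]


def allocate(participants):
    """participants: list of dicts with hypothetical 'id' and 'age' keys."""
    strata = [
        [p for p in participants if p["age"] <= 45],   # stratum 1: 45 years or younger
        [p for p in participants if p["age"] > 45],    # stratum 2: older than 45 years
    ]
    allocation = {}
    for i, group in enumerate(strata):
        for person, arm in zip(group, block_randomise(len(group), seed=i)):
            allocation[person["id"]] = arm
    return allocation
```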
Outcomes and data collection The primary outcome of this trial was quality of life; secondary outcomes were related to chronic pain experiences: pain intensity and pain interference, catastrophizing level, pain self-efficacy, kinesiophobia, and fear-avoidance behaviours. Quality of life was evaluated by the Spanish version of The Functional Assessment of Cancer Therapy—Breast plus arm morbidity (FACT–B + 4) . It was originally validated by Brady et al. (1997) and later the arm subscale was developed and incorporated into the existing FACT-B by Coster et al. (2001) . It is a 41-item instrument designed to measure six domains of quality of life in breast cancer patients: physical well-being (PWB), social well-being (SWB), emotional well-being (EWB), functional well-being (FWB), breast-cancer subscale (BCS), and lymphedema (ARM) subscale. The overall score ranges from 0 to 148 points. A higher score translates into a better quality of life. The alpha coefficient (internal consistency) and the test–retest reliability of the Spanish version of the FACT-B + 4 were high (alpha = 0.87; intraclass correlation coefficient: 0.986) . The Spanish version of the Modified Brief Pain Inventory—Short Form (BPI-SF) was used to assess pain intensity and pain interference with daily activities . It is an 11-item instrument which has been previously assessed in the cancer population for this purpose . The questionnaire has two subscales, one related to pain intensity (four items) and another related to the pain interference with activities of daily living (seven items). All items are scored on a scale from 0 to 10 and each dimension is calculated as an average, with a higher score indicating greater intensity or greater impact on daily life. The internal consistency and the test–retest reliability between dimensions were good (0.87 and 0.89) and low to moderate (0.53 and 0.77), respectively . The Spanish version of the Pain Catastrophizing Scale was used to evaluate pain catastrophizing . This scale is among the most valid instruments to assess this complex construct defined as “to view or present pain or pain-related problems as considerably worse than they actually are” . The scale consists of three subscales (rumination, magnification, and helplessness), whose items will be valued from 0 (nothing) to 4 (all the time) to obtain a total score that ranges from 0 to 52. A higher score translates into a higher level of catastrophizing. The scale has adequate internal consistency (Cronbach’s alpha = 0.79), test–retest reliability (intraclass correlation coefficient = 0.84), and sensitivity to change (effect size ≥ 2) . The Spanish version of the Pain Self-Efficacy Questionnaire (PSEQ) was chosen to assess self-efficacy level related to pain . It is a 22-item instrument, and each item is scored from 0 to 10. 0 is equal to “I think I am totally incapable” and 10 is equal to “I think I am totally capable”. The total score ranges from 0 to 220. A higher score on the questionnaire corresponds to a higher level of self-efficacy. The internal consistency and the test–retest reliability between dimensions were 0.91 and 0.75) . This measure has been previously used in cancer survivors with pain . The Tampa Scale for Kinesiophobia (TSK-11) Spanish version was chosen to assess the level of kinesiophobia . This scale is one of the most used to evaluate kinesiophobia in patients with pain, including breast cancer population . 
It is composed of two factors (avoidance of activity and harm) with a total of 11 items that are valued from 1 (totally disagree) to 4 (totally agree). The total score obtained ranges from 11 to 44. A higher score indicates a higher level of kinesiophobia. The internal consistencies (Cronbach's alpha = 0.79) found for this scale are good . Finally, to assess fear-avoidance behaviours, we used the Fear Avoidance Components Scale Questionnaire—Spanish Version (FACS–SP) . It is a questionnaire that allows us to evaluate a patient's fear of pain and consequent avoidance of physical activity due to fear. The questionnaire consists of 20 items in which a patient rates his agreement with each statement on a 6-point Likert scale, where 0 = completely disagree, 6 = completely agree. There is a maximum score of 100. A higher score indicates more strongly held fear-avoidance beliefs. Five severity levels are available for clinical interpretation: subclinical (0–20), mild (21–40), moderate (41–60), severe (61–80), and extreme (81–100) . It has been previously used in the breast cancer population . In addition to these questionnaires, qualitative data were collected in an online interview. The information was collected through a semi-structured interview based on four pre-defined topics: pain experience (intensity, location, onset, evolution, factors that aggravate and relieve pain), pain coping strategies (e.g. analgesics, therapeutic exercise, physiotherapy), lifestyle habits (e.g. regular exercise or diet), and any notable milestones that might affect their pain experience (e.g. major work or family changes). The interviews were not recorded, but were transcribed. Responses were analysed inductively by one researcher (PMM), who identified similar themes for each topic. A weekly online diary was used to collect information on the acquisition of key concepts from the sessions. Finally, participants were asked about their satisfaction with the programme. All outcomes and qualitative information were collected by two trained and blinded evaluators at four different timepoints: before intervention (T0), after 4-week PNE (T1), after 12-week complete intervention PNE + GEM-Y (T2), and after 3 months of follow-up (T3). Participants' satisfaction was evaluated at T1 and T2. The outcomes were assessed using the above instruments that participants completed by themselves. Description of the intervention in the experimental and control group An interactive online person-focused therapeutic programme, based on Rogers' person-centred care approach and combining PNE and GEM-Y, was implemented in the experimental group. All sessions were delivered live and supervised by a trained physiotherapist using the videoconferencing platform of the University of Sevilla. In addition, WhatsApp and e-mail were used during the intervention to provide additional support, educational materials, or to answer queries. The sessions were applied in groups of 10–15 participants. The duration of the programme was 3 months, and it was divided into two parts. The first included 8 sessions of PNE during the first month (2 sessions per week, 1 h/session), and the second included 16 sessions of GEM-Y during the following 2 months (2 sessions per week, 1 h/session). Figure summarises the structure of the intervention. PNE sessions focused on explaining the mechanisms of pain, explaining pain as an individual experience, and linking pain to lifestyle factors . A detailed description of the proposed intervention has been reported previously .
Attendance or non-attendance, with the reasons for non-attendance, was recorded for each session. In addition, a weekly individual online pain diary was used as a home-based work method. Participants in the control group did not receive any additional educational or movement-based intervention during the study period. They continued with their usual care for cancer-related symptoms and medical appointments. After the follow-up period, they were offered the content of the programme. An online educational booklet was provided to both groups. This booklet provided educational information in a very short format, addressing the following topics: breast cancer and its most common sequelae, biopsychosocial model, acute pain and chronic pain mechanisms, therapeutic exercise, and chronic pain. Method for data analysis The software IBM Statistics Package for Social Science®, v.29 (IBM Corp, NY, USA) was used to perform the statistical processing of data following a per protocol analysis. It was established that only the data of those participants in the experimental group who had attended at least 50% of the sessions would be analysed (per protocol analysis). The normal distribution of the variables was assessed with the Shapiro–Wilk test. Descriptive data are reported as mean ± standard deviation, or median (interquartile range Q3–Q1). For the variables where the four measurements followed a normal distribution, a mixed factorial ANOVA was used with group as the between-subject factor and time as the within-subject factor (partial Eta squared coefficient ηp2 effect size). Prior to ANOVA analysis, the Mauchly test was used to check the sphericity assumption; if the sphericity hypothesis was not met, the Greenhouse–Geisser correction was used. In addition, for variables where no normal distribution was observed at any time point, comparisons within and between groups were assessed using the Student t test/Welch t test (Cohen's d effect size) or the Mann–Whitney U test (Rosenthal's r effect size) with Bonferroni corrections. All statistical tests were performed considering a confidence interval (hereinafter CI) of 95% ( p -value < 0.05).
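A rough sketch of the non-parametric branch of this analysis plan is shown below (Shapiro–Wilk normality check, then Welch t test or Mann–Whitney U with a Bonferroni correction, with Rosenthal's r derived from the normal approximation of U). It is not the authors' script: the group arrays are placeholders, tie correction is omitted, and the mixed factorial ANOVA itself would require a dedicated repeated-measures routine that is not shown.

```python
# Sketch of the between-group comparisons described above (placeholder data, no tie correction).
import numpy as np
from scipy import stats


def compare_groups(experimental, control, n_comparisons=4, alpha=0.05):
    experimental, control = np.asarray(experimental), np.asarray(control)
    _, p_norm_exp = stats.shapiro(experimental)
    _, p_norm_ctrl = stats.shapiro(control)
    if p_norm_exp > alpha and p_norm_ctrl > alpha:
        # Approximately normal: Welch t test (Cohen's d would be the effect size)
        _, p = stats.ttest_ind(experimental, control, equal_var=False)
        return {"test": "welch_t", "p_bonferroni": min(p * n_comparisons, 1.0)}
    # Otherwise: Mann-Whitney U with Rosenthal's r = |Z| / sqrt(N)
    u, p = stats.mannwhitneyu(experimental, control, alternative="two-sided")
    n1, n2 = len(experimental), len(control)
    z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return {
        "test": "mann_whitney",
        "p_bonferroni": min(p * n_comparisons, 1.0),
        "effect_size_r": abs(z) / np.sqrt(n1 + n2),
    }
```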
A total of 107 breast cancer survivors were recruited, of which 58 did not meet the selection criteria, obtaining a sample of 49 women (27 were randomly assigned to the control group and 22 to the experimental group). Five participants dropped out in the control group and eight in the experimental group. The reasons for dropout were all related to the difficulty of fitting the intervention schedule around other responsibilities in daily life (e.g. work, family, medical). Finally, a total of 36 participants were analysed (22 in the control group and 14 in the experimental group). No adverse or harmful events were reported in either group. The flow diagram of the trial is presented in Fig. . Baseline characteristics for each group are shown in Table . Primary outcome: quality of life (FACT—B + 4) Tables and show the results of the primary outcome. The mixed factorial ANOVA analysis revealed a significant time*group interaction for the overall quality of life score ( F (3, 102) = 4.80, p = 0.010; η p 2 = 0.124), but not for the physical, functional, and breast cancer subscales (Table ). A significant difference in favour of the experimental group was also observed for the emotional subscale at the follow-up assessment, but not for any of the other FACT-B + 4 dimensions (Table ). Secondary outcomes: pain intensity (BPI), pain interference (BPI), kinesiophobia (TSK-11), catastrophizing level (PCS), fear—avoidance behaviours (FACS-SP), and self-efficacy (PSEQ) Table shows the results of the secondary outcomes. Significant differences in favour of the experimental group were found for pain intensity ( p = 0.004, d = 1.44), pain interference ( p < 0.001, d = 2.08), catastrophizing level ( p = 0.039, r = 0.41), and pain self-efficacy ( p = 0.009, r = 0.50). These differences persisted at follow-up (T3). No differences were found for kinesiophobia or fear-avoidance behaviours.
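For reference, the snippet below shows one standard way to obtain a between-group Cohen's d of the kind reported above (pooled-SD formulation applied, for example, to change scores); the arrays are hypothetical placeholders rather than trial data, and the exact computation used in the study may differ.

```python
# Pooled-SD Cohen's d for a between-group comparison (illustration only).
import numpy as np


def cohens_d(group_a, group_b) -> float:
    a, b = np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)
    n_a, n_b = len(a), len(b)
    pooled_sd = np.sqrt(((n_a - 1) * a.std(ddof=1) ** 2 +
                         (n_b - 1) * b.std(ddof=1) ** 2) / (n_a + n_b - 2))
    return (a.mean() - b.mean()) / pooled_sd
```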
Secondary outcomes: qualitative data analysis The main themes that emerged from the interviews were prolonged rest and overexertion (overdoing exercise) as factors that aggravate pain; movement and well-dosed exercise as factors that relieve pain; and regular exercise as a lifestyle habit that improves the experience of pain. The analysis of the pain diaries showed that the acquisition of the key concepts of the weekly PNE sessions was higher than 75%, except for the content of week 3, where it was slightly lower. Regarding the yoga sessions, almost 100% of participants perceived an improvement in their pain-related functionality during the GEM-Y. Overall, participants' satisfaction with the programme was quite good, with most participants saying they were "very satisfied". The most highly rated aspects were the usefulness of the programme, the practical content, and the quality of the explanations. The aspects that received a lower average score were the group and online work experience. This clinical trial aimed to evaluate the effectiveness of an interactive online group intervention combining PNE and GEM-Y in improving the quality of life (primary outcome) and pain experience (secondary outcomes) in breast cancer survivors with chronic pain, compared with usual care. The results showed a significant time per group interaction in favour of the experimental group for the overall quality of life measure.
In addition, the intervention appears to be more effective than control for pain experience outcomes, except for the level of kinesiophobia, as significant differences were observed immediately after intervention (T2) and at the follow-up period (T3). Our results are in contrast with previous research on the effectiveness of pain education in improving quality of life in people with cancer-related pain. González-Martín et al. conducted a systematic review on this topic and concluded that patient education did not improve quality of life in patients with cancer. However, our research has demonstrated a significant time per group interaction in favour of the experimental group after a 4-week PNE intervention for the overall quality-of-life score of the FACT-B + 4. In addition, our results for the total score are clinically relevant as the between-group difference in their within-group score changes (T0–T2) is above the minimum clinically important difference for this measure . This controversy with previous research could be explained by the fact that some of the pain education interventions studied were based on a more general concept of pain education, which was not in line with the principles of PNE . In our case, a patient-centred approach was followed and our content was based on a biopsychosocial approach. Regarding pain experience outcomes, our results are partially consistent with previous research. We observed significant differences in favour of the experimental group across all assessment timepoints for pain intensity, pain interference, and catastrophizing. Similarly, Pas et al. (2020) conducted a pilot study on the effect of PNE on persistent pain in breast cancer survivors and concluded that PNE seemed to have a beneficial effect in improving pain intensity and level of catastrophizing in this population. However, when PNE is used as a preventive strategy for chronic pain, the results are controversial. Manfuku et al. (2021) observed that perioperative PNE was more beneficial than general biomedical information for the prevention of pain chronification after breast cancer surgery, while Dams et al. (2022) concluded that preoperative PNE had no significant effects on pain-related disability or pain intensity 18 months after surgery. Finally, González-Martín et al. concluded that PNE has a positive effect on pain intensity and kinesiophobia, which is partially supported by our results, since we observed important benefits on pain intensity and pain interference, but the level of kinesiophobia reported by the participants did not change after the PNE block or the GEM-Y sessions. In general, although PNE interventions seem to have a positive effect on reducing pain intensity in cancer-related pain, the limitations of previous research on this topic and our own force us to interpret these findings with caution. Our findings regarding the effect of yoga on quality of life are consistent with previous studies that support the recommendation of yoga as a therapeutic intervention to improve quality of life in breast cancer . In contrast, the effects of yoga on cancer-related pain are controversial . However, we found important benefits for chronic pain, as pain intensity improved after the yoga intervention, and pain interference and pain self-efficacy only improved when PNE was used in conjunction with the GEM-Y intervention. 
This discrepancy with previous research could be explained by the fact that our intervention, unlike traditional yoga interventions, placed particular emphasis on delivering it according to the principles of a GEM intervention, following the “twin peaks” metaphor explained previously , so that the progression of the yoga exercises was directly related to the evolution of the individual’s pain experience. The results of this study should be interpreted in light of some methodological limitations and strengths. First, some participants ( n = 8) failed to attend sessions, which forced us to develop a per-protocol analysis and resulted in a smaller sample size in the experimental group. Second, the reasons for non-attendance were related to the difficulty of fitting two sessions per week over 12 weeks around other daily responsibilities, so a shorter intervention may have been more appropriate in this population. Third, a follow-up period of 3 months could be considered short; however, this time frame is commonly used when educational interventions are delivered online . Finally, the proposed snowball sampling method could limit the generalisability of our results, as well as the representativeness of the subjects analysed. Regarding strengths, to the best of our knowledge, this is the first clinical trial to combine PNE and GEM-Y in the breast cancer population. Second, the online modality had some advantages, such as the accessibility of the programme regardless of the participant’s location and the reduction of costs and resources . In addition, the choice of this modality was based on the results of a previous review , which indicated that the mixed format of patient education (i.e. face-to-face meetings plus online information) was the most beneficial for improving quality of life in breast cancer survivors . These findings lead us to hypothesise that an online educational intervention with direct person-therapist interaction could be a more contemporary version of the mixed format, supported by new technologies. A 12-week interactive online group intervention based on PNE plus GEM-Y appears to be more effective than usual care in improving quality of life in breast cancer survivors with chronic pain. A time per group interaction was observed for the FACT-B + 4 overall quality-of-life score. The intervention also appears to be more effective than usual care for improving the participants’ pain experiences. Most of the effects of the intervention were maintained at the 3-month follow-up. However, due to the study limitations, these results must be interpreted with caution and further research is needed.
null
e843f7f3-0522-43b9-b73c-d742c5d2bc9b
11171787
Microbiology[mh]
Soil salinity and alkalinity represent a major global environmental issue that threatens food security and agricultural sustainability. The total area of saline–alkali land in China is about 9.91 × 10⁷ ha . In northeast China, the saline–alkali land mainly contains high concentrations of carbonates, including Na₂CO₃ and NaHCO₃; therefore, it is also known as Soda saline–alkali land . The presence of sodium ions in the soil induces stress, which significantly hampers plant growth . On the other hand, the saline–alkali stress caused by carbonates has more serious impacts on plants than salt stress . This is because, in addition to the osmotic stress, ion toxicity, and oxidative stress that are usually caused by salt stress , alkali stress also causes nutrient deficiency and high-pH stress . The nutrient deficiency mainly refers to the unavailability of nutrients such as phosphorus and iron, which precipitate in alkaline soil . Compound saline and alkali stresses can lead to more severe oxidative stress, disruption of ion homeostasis, and metabolic disorders , which damage cell structure and activity and thereby inhibit plant growth and development. To re–establish osmotic and ion homeostasis and adapt to adverse environments, plant cells rapidly accumulate inorganic ions and small organic molecules such as betaine, proline, polyamines, polyols, and sugars . For instance, the trehalose biosynthesis process in quinoa leaves is significantly affected under saline–sodic stress . In addition to the strong accumulation of lipids and amino acids, the energy metabolism and reactive oxygen species (ROS)-scavenging machineries in rice leaves, such as the tricarboxylic acid (TCA) cycle and glutathione metabolism, are significantly enhanced under severe saline–sodic stress . The accumulation of proline and its derivatives in rice leaves facilitates osmotic balance control, thereby improving plant saline–alkali tolerance . Considering the potential threats of saline–alkali stress to ecosystems and food security, significant efforts are needed to improve saline–alkali land. Plant growth–promoting rhizobacteria (PGPRs) play a crucial role in plant growth . For instance, inoculation of Sinorhizobium meliloti GL1 and Enterobacter ludwigii MJM–11 improves the yield, nodulation, and quality of alfalfa in a saline–alkali environment . Enterobacter asburiae strain D2 has the ability to produce 1–aminocyclopropane–1–carboxylate (ACC) deaminase, indole−3−acetic acid (IAA), and siderophores, as well as to solubilize phosphate; it may therefore alleviate the impacts of saline–alkali stress on rice . Bacillus subtilis has been found to enhance crop saline–alkali stress tolerance by preventing excessive sodium accumulation and enhancing nutrient absorption; application of the bacterium enhances the activities of peroxidase and catalase in leaves, thereby protecting plants from salt stress–induced damage . A combination of Enterobacter sp. Z1 and Klebsiella sp. Z2 can significantly improve soybean growth and nitrogen fixation by producing flavonoids, IAA, salicylic acid (SA), and taurine . Enterobacter aerogenes (LJL–5) and Pseudomonas aeruginosa (LJL–13) synergistically increased the biomass of alfalfa plants and improved phosphorus content and antioxidase activities (superoxide dismutase, peroxidase, and catalase) under saline–alkali conditions .
Research has shown that the strains Stutzerimonas stutzeri A38 and Bacillus pumilus A49 have the ability to enhance root size in Medicago sativa and Medicago polymorpha plants when exposed to osmotic stress . In addition, PGPRs can also protect plants from heavy metal pollution by regulating the levels of plant endogenous SA, abscisic acid (ABA), and jasmonic acid (JA) . Bacillus sp. ZC3–2–1 can improve the phytoremediation efficiency of Cd–Zn–contaminated soil and maintain ion homeostasis by promoting the phytoextraction and immobilization of the metal ions . Thus, PGPRs are widely used as biofertilizers to improve crop biomass and yields, and as soil amendments to improve land availability . Owing to their low carbon footprint, environmental friendliness, and cost-effectiveness, PGPRs have gained increasing public acceptance . The diverse community of microorganisms associated with plant roots, comprising approximately tens of thousands of species and known as the plant’s second genome, plays vital roles in plant development, health, and environmental adaptation in natural habitats . For example, the presence of Enterobacterium and Pseudomonas in soil indicates that exotic species rely heavily on rhizosphere microorganisms to fulfill their nutritional needs . The co–evolution of and interactions between plants and microorganisms suggest that plants can shape their rhizosphere microbial communities, which is supported by the observation that different plant species host specific microbial communities when grown in the same soil . This complex relationship goes beyond the rhizosphere and involves a diverse range of interactions that have a significant influence on plant vitality and the dynamics of the ecosystem . Furthermore, recent progress in high–throughput sequencing and omics technologies has provided a deeper understanding of the functions and metabolic processes of the rhizospheric microbiota , revealing the complex involvement of rhizospheric microorganisms in nutrient cycling, the formation of soil structure, and the regulation of plant signaling pathways . This knowledge has facilitated the development of novel approaches to exploiting the plant–beneficial characteristics of rhizospheric microorganisms . Moreover, the recently developed field of microbiome engineering has shown the potential to intentionally alter rhizosphere microbial populations to enhance plant production and adaptability to environmental changes . Various studies have investigated the potential role of PGPR in alleviating the impacts of environmental stresses on plants while increasing biomass and yield . However, there is limited knowledge about the interactions between PGPR and plants under alkaline–sodic stress caused by carbonates. To clarify the mechanisms by which PGPRs improve plant tolerance to alkaline–sodic stress, and how plants respond to these PGPRs, we isolated PGPRs from the rhizosphere microorganisms of local plants grown in Soda saline–alkali land; selected an effective strain, Bacillus altitudinis AD13−4; and explored the mechanisms underlying its growth–promoting function. Via biochemical, molecular biology, and transcriptomic analyses, strain AD13−4 was shown to improve plant adaptation to alkaline–sodic environments by regulating plant metabolism, signal transduction, and plant–pathogen interaction, as well as by affecting the abundance and composition of the rhizosphere microbial community.
Our study provides theoretical support for the optimization of saline–alkali–tolerant PGPR and valuable information for elucidating the alkaline–sodic tolerance mechanism of plants. 2.1. Bacillus Altitudinis AD13−4 Promoted Plant Growth and Development under Saline–Alkali Stress To illustrate the growth–promoting mechanisms of PGPR under alkaline–sodic stress, we screened PGPR using carbonate medium (1/2MS, pH 8.0, 1.5 mM NaHCO 3 ) to simulate an alkaline–sodic condition. Using Arabidopsis as the plant material, we obtained over 400 PGPR from the rhizosphere microorganisms of local plants growing in Soda saline–alkali land (Anda, Heilongjiang Province, China) and selected an effective strain, AD13−4, for mechanism investigation. As shown in A, under normal conditions (pH 5.8), Arabidopsis seedlings with and without inoculation of strain AD13−4 grew well, with no significance. Under alkaline–sodic conditions (pH 8.0 + 1.5 mM NaHCO 3 ), although Arabidopsis seeds germinated, the roots did not elongate and the cotyledons did not expand, indicating severe inhibition of seedling development. However, after inoculation with the strain AD13−4, the growth of the seedlings was restored, and the development of the aerial parts and roots were similar to those under normal conditions ( A,B). These results indicated that the strain AD13−4 effectively alleviated the growth inhibition of Arabidopsis seedlings under alkaline–sodic stress. To confirm the plant growth–promoting effects of strain AD13−4, we conducted soil culture experiments with maize, rice, and alfalfa (the optimal concentration of carbonate solution for maize is 80 mM; for rice, 50 mM; and for alfalfa, 40 mM, respectively). The results showed that under saline–alkali conditions, the fresh weight, dry weight, plant height, root length, and chlorophyll contents significantly decreased ( C,D; ), indicating inhibition of alkaline–sodic stress on photosynthesis and plant biomass. The activities of antioxidant enzymes (superoxide dismutase, peroxidase, and ascorbate peroxidase) significantly increased ( F; ), and consistent with this, the ROS-scavenging capacities significantly increased ( G), indicating an increase in ROS levels which was induced by saline–alkali stress. The malondialdehyde (MDA) content also significantly increased ( E), indicating that the increased ROS level caused oxidative damage to the membranes. After inoculation of the strain AD13−4, the fresh weight, dry weight, plant height, and chlorophyll contents of the plants significantly increased ( D,E), indicating an enhancement of photosynthesis and plant growth and development. The antioxidant enzyme activities and ROS-scavenging capacities also significantly increased, while the content of MDA was significantly reduced ( F,G), suggesting that the strain AD13−4 improved plant antioxidant capacities, which alleviated oxidative damage to the membranes. The contents of total proteins and sugars significantly increased, while that of proline significantly decreased ( E), suggesting that plant metabolism was regulated by strain AD13−4 to adapt to saline–alkali condition. All these results indicate that strain AD13−4 regulates plant antioxidant capacities and metabolic processes to adapt to saline–alkali conditions and improves plant growth and development. And the growth-promoting effects of strain AD13−4 are efficient and broad–spectrum. 2.2. 
Identification and Characteristics of Strain AD13−4 To identify the genus of strain AD13−4, the 16S rRNA fragment was amplified and sequenced. By blasting the NCBI database and phylogenetic tree analysis, strain AD13−4 was identified as a bacterium belonging to the genus Bacillus altitudinis , with 99.16% homology with the Bacillus altitudinis 41KF2b ( A). The AD13−4 colonies on nutrient agar were yellow and transparent, convex with a regular margin, and 2–3 mm in diameter after 14 h incubation. Strain AD13−4 utilized tartrate, Simmons’s citrate, and malonate as the sole carbon sources, respectively . In summary, strain AD13−4 is a novel bacterium belonging to the genus Bacillus altitudinis . As a PGPR that functions under saline–alkali conditions, strong salt and alkali tolerance is necessary. First, we checked pH tolerance of strain AD13−4. The OD 600 value of the bacterial culture with different pH showed that strain AD13−4 could not survive under the pH 3 condition, but could grow well under pH 4–9 conditions. Under the pH 10 condition, the concentration of bacteria culture after 20 h cultivation was significantly lower than those under pH 4–9 conditions, which was 52.5% of that under the pH 4 condition ( B). These results indicate that strain AD13−4 has a wide pH adaptation range. Next, the alkali tolerance of strain AD13−4 was checked. The concentrations of bacterial culture after 24 h cultivation with NaHCO 3 indicated no significant difference in the growth of strain AD13−4 under a 40 mM NaHCO 3 condition. As the concentration of NaHCO 3 increased, the proliferation ability of strain AD13−4 decreased, but it could still maintain a certain degree of growth and reproduction. Even when the NaHCO 3 concentration reached 200 mM, strain AD13−4 could still survive ( C), indicating that strain AD13−4 has a strong tolerance to alkali stress. Then, the salt tolerance of strain AD13−4 was checked. The concentrations of bacterial culture showed that strain AD13−4 could grow and proliferate in a high NaCl concentration of 1.6 M, but could not survive when the NaCl concentration reached a high concentration of 2 M ( D), indicating that strain AD13−4 has a strong tolerance to salt stress. To confirm the effects of secretions of strain AD13−4 on plants, Arabidopsis seeds were sown on the regular or carbonate medium containing cell−free fermentation broth of AD13−4. While, the seedlings didn’t show significance compared to those grown with inoculation of strain AD13−4. To investigate the growth−promoting mechanism of strain AD13−4, its secretions were detected. Firstly, the ability of strain AD13−4 to secrete acidic substances was tested. After inoculation of strain AD13−4 into LB medium with pH 8.1, the pH value rapidly decreased to 5.98 after 4.5 h cultivation ( E), indicating that strain AD13−4 has a strong ability to secrete acidic substances under alkali conditions, which could alleviate the alkalinity of the rhizosphere soil and facilitate root development. Further detection indicated that strain AD13−4 exhibits the activities of solubilizing phosphorus; fixing nitrogen; and producing siderophores, IAA, ACC deaminase, biofilms, and growth-promoting volatile substances ( F; ). These results indicate that strain AD13−4 can provide available phosphorus, iron ions, and nitrogen sources for plant growth; colonize on the root surface by forming biofilms; regulate plant auxin levels; and alleviate the inhibitory effects of ethylene on plant growth. 2.3. 
Strain AD13−4 Regulated Endogenous Phytohormone Levels and Cell Division Activity under Alkaline–Sodic Stress The phytohormone auxin plays a major role in the entire process of plant development and stress response. Auxin enhances plant development by regulating cell division, tissue expansion, stimulus response, etc. . The PIN−FORMED (PIN) auxin exporters redirect auxin fluxes in response to environmental stimuli via their dynamic polar subcellular localizations . Due to the alleviation of root development inhibition by strain AD13−4 under alkaline–sodic stress, we examined its effects on cell division activity using CYCB1;1::GUS , and on the auxin level using DR5::GUS reporter lines, respectively. CYCB1;1 is required for cell division in the M phase, and its expression is significantly induced in response to auxin . On the other hand, cytokinin and auxin jointly regulate plant development . The histochemical staining results indicated that under alkaline–sodic stress, the DR5::GUS signals increased markedly in the stele and columella cells ( A), while CYCB1;1::GUS signals decreased dramatically in the root apical meristem (RAM) ( B), suggesting that excessive accumulation of the auxin inhibited cell division and root elongation. After inoculation of strain AD13−4, the auxin level decreased significantly and cell division activity was restored, suggesting that the strain AD13−4 can regulate endogenous phytohormone levels to promote cell division and root elongation. To verify the effects of the strain AD13−4 on plant endogenous hormone levels, we observed the growth phenotypes of the pin mutants after inoculating strain AD13−4 under saline–alkali conditions. As shown in C,D, under normal conditions, the root lengths of pin1 , pin2 and pin7 were significantly shorter than those of the wild type, especially pin2 and pin7 . Under alkaline–sodic conditions (pH 8.0 + 1.5 mM NaHCO 3 ), the root lengths of Col-0, pin1 , and pin2 significantly decreased, while that of pin 7 did not show significant change and was longer than that of Col-0. After inoculation with strain AD13−4, the root lengths of Col-0 and pin mutants significantly increased. There was no significant difference between Col-0, pin1 , and pin7 , while root length of pin2 was significantly shorter than these lines. These results indicated that strain AD13−4 significantly affected auxin transport and distribution as well as cell division activity, thereby promoting plant development under saline–alkali stress. The different change patterns in the root length of the pin mutants after the treatments indicated that PIN1, PIN2, and PIN7 have different response mechanisms to alkaline–sodic stress and strain AD13−4. The outermost layer of cells at the tip of the root cap that are about to detach or have already detached from the root cap are called border cells. The differentiation of border cells is regulated by stem cell activity and auxin . Under alkaline–sodic conditions, the differentiation frequency of border cells with application of strain AD13−4 was significantly higher than that without AD13−4 . This result suggested that strain AD13−4 may modulate cell differentiation via affecting root stem cell activity and auxin levels. 2.4. Transcriptome Analysis of Alfalfa root Response to Strain AD13−4 under Alkaline–Sodic Conditions To elucidate the molecular mechanisms of plant response to strain AD13−4 under alkaline–sodic stress, we conducted transcriptome analysis using alfalfa roots treated with a 40 mM carbonate solution. 
The experimental materials were divided into three groups: CK (treated with water), SAS (saline–alkali stress, treated with carbonate solution), and SAS + AD13−4. A total of 599.97 Mb of clean reads were obtained, with a Q30 > 93.5% and GC content ranging from 41.34% to 42.08%. These results confirmed the high quality of the assembled transcripts. The principal component analysis (PCA) revealed significant differences among the three groups ( A). The PC1 score plots indicated good cohesion within each group, and the PC2 score plots indicated significant separation between the groups. A total of 6490 differentially expressed genes (DEGs) were identified using DESeq2 R package (v. 1.22.1). In the SAS_vs._CK group, there were 1256 DEGs upregulated and 2455 downregulated, while in AD13−4_vs._SAS group, there were 1710 DEGs upregulated and 2,486 downregulated DEGs ( B). Venn-diagram analysis indicated that there were 2779 and 2294 DEGs unique to the SAS_vs._CK and AD13−4_vs._SAS groups, respectively, while 1417 DEGs were common to the two groups ( C). The comparison of Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways between AD13−4_vs._SAS and SAS_vs._CK groups showed significant changes in many pathways under both AD13−4 and SAS conditions, including secondary metabolism, photosynthesis, signal transduction, plant–pathogen interactions, phenylpropanoid biosynthesis, etc. ( A,B). In addition, in the AD13−4_vs._SAS group, there were significant changes in metabolic pathways ( A, arrow). These results indicated that the above pathways responsive to alkaline–sodic stress were also regulated by strain AD13−4, thereby alleviating the impacts of alkaline–sodic stress on alfalfa. Gene ontology (GO) analysis showed that the DEGs were classified into thirty-two subcategories in three main categories: fifteen subcategories in Biological Processes (BP), two in Cellular Components (CC), and fifteen in Molecular Functions (MF). The DEG distribution patterns in AD13−4_vs._SAS and SAS_vs._CK groups were similar . A comparison of GO enrichment pathways between AD13−4_vs._SAS and SAS_vs._CK indicated significant enrichment in the terpenoid biosynthesis and metabolic pathways in both groups ( C,D, orange arrows), while the redox reaction pathways were specifically enriched in AD13−4_vs._SAS ( C, green arrows) and the photosynthesis pathways were specifically enriched in SAS_vs._CK ( D, blue arrows), indicating that these processes were important for plants to adapt to the alkaline–sodic environments, or for strain AD13−4 to alleviate alkaline–sodic stress. We also detected the contents of total phenols and total flavonoids. The results indicated that under alkaline–sodic stress, the total phenolic content significantly increased, but the total flavonoid content had no significance. After inoculation of strain AD13−4, both of them significantly increased ( E), indicating that strain AD13−4 can regulate secondary metabolic processes, thereby improving plant alkaline–sodic tolerance. Due to the significant difference in DEG enrichment between AD13−4_vs._SAS and SAS_vs._CK, we analyzed the expression of the common DEGs shown in the Venn plot. The results indicated that 114 of the common genes were downregulated under alkaline–sodic stress and upregulated after inoculation of AD13−4. 
Conversely, 327 of the common genes were upregulated under alkaline–sodic stress and downregulated after inoculation of strain AD13−4 , suggesting that these DEGs were not only specifically regulated by alkaline–sodic stress, but also by strain AD13−4 to improve plant adaptation to alkaline–sodic stress. 2.5. Analysis of Signaling Pathways Responsive to Strain AD13−4 under Alkaline–Sodic Stress We conducted an in−depth analysis of the signaling pathways of the AD13−4_vs._SAS group. The signal transduction pathways of alfalfa in response to AD13−4 under alkaline–sodic conditions include two pathways, the MAPK signaling pathway and plant hormone signal transduction. The MAPK signaling pathways included pathogen infection (flg22), pathogen attack (H 2 O 2 ), phytohormone (JA, ethylene, and ABA), and ROS–related pathways ( , green frame, part of the pathways; ). And the plant hormone signal transduction pathways included auxin, cytokinin, gibberellin, ABA, ethylene, JA, SA, and Brassinosteroid–related pathways ( , red frame, part of the pathways; ). These two processes had some overlapping parts, such as the ethylene, JA, and SA−related pathways ( B). Moreover, the flg22−induced pathways were also included in plant–pathogen interaction pathways ( A). The expression levels of many key genes in these pathways were altered significantly after inoculation of AD13−4, such as FLS2 , BAK1 , MPK1 / 2 , PR1 , MYC2 , SAUR , GELLA , PP2C , etc., indicating that strain AD13−4 activated the signaling pathways to enable plants to respond and adapt to alkaline–sodic stress. The RT−qPCR validation results of some of the genes were consistent with those of the transcriptome ( D). The above results indicate that the strain AD13−4 regulates plant tolerance to alkaline–sodic stress by affecting signal transduction pathways. In the stress response signaling pathways, transcription factors (TFs) serve as bridges, transmitting stimulus signals by binding to cis–regulatory elements in the promoters of the target genes. A total of 243 differentially expressed TFs were identified. The top five families were AR2/ERF, MYB, NAC, WRKY, and bHLH, which broadly respond to both biotic and abiotic stresses . Interestingly, compared to the seven bZIP and eight C2C2–Dof TFs identified in SAS_vs._CK group, only one bZIP and four C2C2–Dof TFs were identified in AD13−4_vs._SAS group. Moreover, the TF number in MYB and bHLH families in SAS_vs._CK group significantly decreased compared to those in SAS_vs._CK group . The bZIP, MYB, and bHLH family TFs are frequently involved in plant stress responses, and the Dof family is also reported to be involved in saline–alkali stress response in rice . The decrease in the number of these TFs in AD13−4_vs._SAS group reflected that the strain AD13−4 alleviated the impacts of alkaline–sodic stress on plants. 2.6. Analysis of Metabolic Pathways Responded to Alkaline–Sodic Stress In plants, all terpenoids are derived from isopentenyl diphosphate (IPP) and its enzymatically interconvertible isomer dimethylallyl diphosphate (DMAPP), which are generated from the mevalonate (MVA) and 2−C−methyl−D−erythritol−4−phosphate (MEP/DXP) pathways, respectively ( A). In the MVA pathway, most of the synthetase genes were downregulated, suggesting a possible reduction in IPP contents. In addition, downregulation of IDI , the IPP–DMAPP converting enzyme gene, may also lead to a decrease in IPP and DMAPP production. 
In sesquiterpenoid, triterpenoid, and diterpenoid biosynthesis pathways, the expression levels of many synthase genes had significant changes ( B). The RT−qPCR results of some of the genes were consistent with those of the transcriptome ( C), suggesting potential changes in the production of terpenoids. To verify the speculation, we determined the contents of total terpenoids. Under alkaline–sodic conditions, the total terpenoids significantly increased in the roots, but sharply decreased in the leaves, while after inoculation of strain AD13−4, the total terpenoids significantly increased in both the roots and leaves, especially in the leaves, and the total terpenoid contents increased by over three times compared to that before inoculation ( D). These results indicate that the strain AD13−4 regulated the plant secondary metabolism to enable plants to adapt to alkaline–sodic soil. 2.7. Impacts of Strain AD13−4 on Rhizosphere Bacterial Community Since PGPR can promote soil metabolism, which is mainly related to the soil microbial community , we conducted 16S rRNA gene sequencing of the alfalfa rhizosphere microbiome. The scatter diagrams of PCA with the first (PC1) and the second component (PC2) indicated that the cumulative contribution rates of CK, SAS, and AD13−4 groups were 46.18%, 26.88%, and 63.56%, respectively ( A). The PC1 score plots indicated the stability and repeatability of the results within each group, and the PC2 score plots indicated a significant difference between the groups ( A). Analysis of the 16S rRNA gene sequencing results yielded a total of 584,753 optimized sequences and 246,817,741 bases. The similarity of the operational taxonomic unit (OTU) was 97% with a classification confidence of 70% in the optimized reads (with a length of ≥ 400 bp), and the coverage rate of all samples was above 97%, indicating that the sequencing results were reliable. The inoculation of strain AD13–4 did not significantly affect the alpha diversity of bacteria in alkaline–sodic soil . The dominant bacterial phyla were Bacteroidota , Proteobacteria , Firmicutes , Bdellovibrionota , and Verrucomicrobiota . And the community barplot analysis indicated a significant alteration in the composition and proportion of bacterial families in the rhizosphere soil ( B). The most abundant bacteria were Flavobacteraceae , Pseudomonadaceae , Sphingobacteriaceae , and Chitinophagaceae . After application of strain AD13−4, the proportions of Flavobacteraceae , Chitinophagaceae , Caulobacteraceae , etc., increased; while those of Pseudomonadaceae , Rhodanobacteraceae , Bdellovibrioneceae , etc., decreased; and 37–13 and Weeksellaceae almost disappeared ( B). These results indicated that strain AD13−4 had impacts on abundance and composition of rhizosphere microbiota in alkaline–sodic soil. The cladogram, which explains the evolutionary relationships and biodiversity between species, indicated that the rhizosphere microbiota among AD13−4, SAS, and CK groups had significant differences . Lefse analysis of biomarkers showed that Feruginibacter , Polaromonas , and Nubsella were the biomarkers in strain AD13−4−treated alkaline–sodic soil rather than in alkaline–sodic soil and CK soil , suggesting that these bacteria were recruited to the alfalfa rhizosphere by strain AD13−4. The activity of soil enzymes, a key indicator of soil fertility and abiotic stress, is significantly affected by the soil microbial community . 
The determination of alfalfa rhizosphere soil properties showed that the electrical conductivity increased significantly, and the activities of urease and sucrase were seriously inhibited under alkaline–sodic stress. However, after inoculation of strain AD13−4, the electrical conductivity significantly decreased, and the activities of urease and sucrase were restored ( C), indicating that strain AD13−4 can improve the properties and activities of alkaline–sodic soil.
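As an illustration of the family-level relative-abundance and PCA summaries reported above for the rhizosphere 16S data, the following sketch uses pandas and scikit-learn on a hypothetical OTU count table; the file name, column layout, and the Hellinger transformation are assumptions chosen for demonstration and do not describe the exact pipeline used in this study.

```python
# Sketch: family-level relative abundance and PCA of rhizosphere 16S samples.
# Assumes a hypothetical table 'otu_counts.csv' with OTUs as rows, a 'Family'
# column, and one count column per sample (e.g. CK_1..CK_3, SAS_1.., AD13_4_1..).
import pandas as pd
from sklearn.decomposition import PCA

counts = pd.read_csv("otu_counts.csv", index_col=0)
samples = [c for c in counts.columns if c != "Family"]

# Collapse OTU counts to family level and convert to per-sample relative abundance
family = counts.groupby("Family")[samples].sum()
rel_abund = family / family.sum(axis=0)

# Most abundant families on average (input for a community barplot)
print(rel_abund.mean(axis=1).sort_values(ascending=False).head(10))

# PCA on Hellinger-transformed abundances (square root of proportions),
# with samples as observations
hellinger = rel_abund.T.pow(0.5)
pca = PCA(n_components=2)
scores = pca.fit_transform(hellinger)
for sample, (pc1, pc2) in zip(hellinger.index, scores):
    print(f"{sample}\tPC1={pc1:.3f}\tPC2={pc2:.3f}")
print("explained variance ratio:", pca.explained_variance_ratio_)
```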
The various secretions of strain AD13−4 affect the levels of endogenous phytohormones, which in turn affect plant metabolism; the cell division activity of the meristem; as well as cell differentiation, e.g., root cap and border cell. The root border cells protect continuously growing root tips by secreting compounds such as proteins, polysaccharides, phytoalexins, mucus, organic acids, etc. . The border cells are crucial for root growth and plant health, as they contribute to nutrient accumulation around the roots and may cause changes in the rhizosphere microbial population by improving the physical and chemical properties of rhizosphere soil and recruiting beneficial microorganisms, thereby affecting the diversity, abundance and composition of rhizosphere microbiota. The metabolites secreted by border cells also have defensive functions . For example, a certain flavonoid compound in alfalfa roots can inhibit the growth of trichomonas in soil . The detachment frequency of border cells is regulated by a complicated mechanism involving the expression of a series of TFs, quiescent center identity, stem cell activity, and auxin distribution and concentration . In our study, the frequency of boundary cell generation was significantly higher in the present of strain AD13−4 , suggesting that AD13−4 may accelerate border cell differentiation by affecting endogenous phytohormone levels. In turn, increased abundance of rhizospheric microbiota rewards vigorous plant growth and stress tolerance, indicating a mutually beneficial interaction between plants and PGPRs . The increase in the abundance of beneficial microorganisms in rhizosphere soil promotes carbon and nitrogen cycling in soil, which helps to improve the effectiveness of nutrients in the soil . The application of strain AD13−4 affected plant physiological processes, e.g., primary and secondary metabolism. In the secondary metabolic processes, the terpenoid and flavonoid biosynthesis pathways were altered significantly after the application of strain AD13−4. Terpenoids are the most abundant and structurally diverse metabolites in plants, and play a crucial role in plant growth and development and adaptation to environments . In plants, all terpenoids are derived from IPP and DMAPP, which are generated from the MVA and MEP/DXP pathways, respectively . The total terpenoid contents in alfalfa roots and leaves showed different change patterns under alkaline–sodic stress ( D). This phenomenon suggested a different response of roots and leaves to alkaline–sodic stress, or/and a long−distance transport of terpenoids from leaves to roots directly struggling with alkaline–sodic stress. After the application of strain AD13−4, the total terpenoid contents increased significantly in both roots and leaves ( D), indicating that strain AD13−4 activated the terpenoid biosynthesis pathways in both tissues to improve plant tolerance to stress. In the MVA pathway, 3–hydroxy–3–methylglutaryl–CoA reductase (HMGR) is the first rate–limiting enzyme, closely related to oxidative stress, proliferation, ER morphogenesis, and plant response to hormones . IPP is not only a precursor of terpenes, but is also used to produce brassinosteroids, cytokinins, and phytosterols, which are crucial for cell membrane fluidity and plant growth and development . 
After the application of strain AD13−4, the HMGR expression levels altered significantly, which may have affected IPP contents and, subsequently, antioxidant capacities, endogenous phytohormone levels, cell activities, etc., in plants. Geranylgeranyl diphosphate (GGPP) is a substrate not only for the synthesis of diterpenoids, but also for several important plant hormones, such as gibberellin, abscisic acid, and strigolactone. In addition, GGPP is a precursor of carotenoids and chlorophyll and an important joint in several important secondary metabolic pathways in plants . The changes in upstream and downstream gene expression levels ( B,C) may cause changes in GGPP contents, thereby affecting phytohormone levels and photosynthesis. All the changes suggest crucial roles of the strain AD13−4 in the regulation of plant growth and development and adaptation to environmental stresses. There was also a significant change in phenylpropanoid biosynthesis in KEGG enrichment analyses ( A,B). Phenylpropanoid metabolites mainly include flavonoid and lignin biosynthesis pathways. Salt stress induces the biosynthesis of flavonoid compounds, which in turn act as antioxidants to reduce salt stress–induced oxidative damage . Studies on tomato roots found a significant increase in phenylpropane synthesis genes and metabolites under salt stress . It was recently reported that, to adapt to high saline–alkali stress, rice leaves accumulate a large amount of lipids, organic acids, organic oxygen compounds, phenylpropanoids, and polyketides . In alfalfa roots, a large amount of DEGs were enriched in Phenylpropanoid biosynthesis pathways after application of strain AD13−4, and the increase in total flavonoid contents once again emphasized the importance of this pathway in plant adaptation to alkaline–sodic environment. The inoculation of Bacillus altitudinis AD13−4 significantly improved the physical and chemical properties and activities of soil, as well as rhizospheric microbiota composition and abundance, which might contribute to the alleviation of alkaline–sodic stress on plants. At the family level, the relative abundance of Flavobacteraceae and Chitinophagaceae , which belong to Bacteroidota , increased after inoculation of AD13−4. It has been reported that the bacteria in Bacteroidota might enhance soluble phosphorus in soil by secreting phosphorus−solubilizing enzymes. And Bacteroidota and Firmicutes can synergistically degrade rice straw in paddy fields . Furthermore, the relative abundance of some biomarkers, Caulobacteraceae , Feruginibacter , Polaromonas , and Nubsella, increased after applying strain AD13−4. Many strains of Caulobacteraceae, which belongs to Proteobacteria, have the ability to fix nitrogen and participate in the nitrification process to increase soil nutrients . Feruginibacter is often found in activated sludge for treating various types of wastewater, secrets a large amount of extracellular polymers, and is related to the formation of sludge flocs and biofilms . Polaromonas has been reported to potentially influence plant acclimation and resilience to cold stress . And Nubsella zeaxanthinifaciens gen. nov., sp. nov. belongs to the family Sphingobacteriaceae and produces Zeaxanthin, the major carotenoid pigment . The carotenoids act as a source of retrograde signals with impacts on plant development and stress responses . Flavobacteriaceae is considered as a key polysaccharide-degrading bacterium . 
The organic macromolecules in nature can be hydrolyzed by microorganisms, e.g., Flavobacteriaceae strain F89T , and used as a source of nutrition and energy, which is beneficial for plant growth and adaptation to the environment. Flavobacterium strain TRM1 can suppress R. solanacearum disease development in a susceptible plant, revealing its role in protecting plants from microbial pathogens . Such beneficial bacteria (which can be regarded as PGPR) promote nitrogen, carbon, and phosphorus cycling by increasing soil enzyme activities, thereby increasing soil nutrient contents and enhancing plant resistance to biotic and abiotic stress. The increase in the abundance of the above bacteria suggests that strain AD13−4 plays a crucial role in recruiting beneficial bacteria to enhance soil activity, plant growth and development, and disease resistance. Biofilms facilitate the colonization of PGPRs on the surfaces of roots, thereby supporting a stable growth–promoting function . The strong ability of strain AD13−4 to form biofilms suggests a high affinity for plant roots, thus providing the conditions for better functional utilization. Our analysis of the molecular mechanism of the alfalfa root response to Bacillus altitudinis AD13−4 provided potential genetic targets for developing alkaline–sodic-tolerant plants. On the other hand, strain AD13−4 can survive well in alkaline–sodic soil, revealing its capacity to colonize and adapt to the interface between soil and plants, which provides a theoretical basis for its application to the improvement of alkaline–sodic land. 4.1. Plant Materials and Treatments Arabidopsis seeds (Col-0, DR5::GUS , CYCB1;1::GUS , and pin1/2/7 ) were surface−sterilized with 75% ( v / v ) ethanol for 5 min and then washed three times with sterile H 2 O. The sterilized seeds were sown on regular medium (1/2MS, pH 5.8) or carbonate medium (1/2MS, pH 8.0 + 1.5 mM NaHCO 3 ), which simulated alkaline–sodic conditions. Seedlings were cultivated vertically at 22 °C, with 16 h light/8 h dark. To investigate the effects of the secretions of strain AD13−4 on plants, rather than of the bacterium itself, Arabidopsis seedlings were photographed before their root tips came into contact with the bacterium. To further confirm the effects of the secretions of strain AD13−4 on plants, its cell−free fermentation broth was used for the germination of Arabidopsis seeds. The bacterium was cultured overnight in 1/2MS liquid medium, the concentration was adjusted to OD 600 = 1, and the culture was centrifuged. The supernatant was filtered through a 0.22 μm filter, and 200 μL of it was applied to the surface of 1/2MS medium or carbonate medium and air−dried. Then, the seeds were sown on the medium with/without the cell−free fermentation broth and cultivated vertically. For the soil culture, a carbonate solution containing Na 2 CO 3 :NaHCO 3 = 1:9 (molar ratio) was used to simulate the carbonate composition of the soda saline–alkali land in Northeast China. The treatment was applied once every seven days, three times in total. Rice ( Oryza sativa ) seeds germinated at 28 °C for five days were sown in peat soil (PINDSTRUP SUBSTRATE, 5−20 mm, 330 L), with a total of 30 seedlings (three pots per treatment and 10 seedlings per pot). The control group (CK) was treated with water, the SAS group was treated with 50 mM carbonate solution, and the SAS + AD13−4 group was treated with carbonate solution and strain AD13−4. The growth conditions were 26 °C and 10 h light/14 h dark, with a humidity level of 50–70%.
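As a quick arithmetic check of the carbonate treatments described above, the sketch below converts a target total carbonate concentration at the stated Na 2 CO 3 :NaHCO 3 = 1:9 molar ratio into grams of each salt per litre (e.g., for the 40–80 mM treatments used for the different species). The molar masses are standard values; the helper function and its name are illustrative and not part of the original protocol.

```python
# Illustrative helper (not from the paper): grams of each salt per litre for a
# Na2CO3:NaHCO3 = 1:9 (molar) carbonate solution of a given total concentration.
M_NA2CO3 = 105.99   # g/mol
M_NAHCO3 = 84.01    # g/mol

def carbonate_recipe(total_mM, ratio=(1, 9)):
    """Return (g/L Na2CO3, g/L NaHCO3) for the stated molar ratio."""
    parts = sum(ratio)
    na2co3_M = total_mM / 1000 * ratio[0] / parts   # mol/L
    nahco3_M = total_mM / 1000 * ratio[1] / parts
    return na2co3_M * M_NA2CO3, nahco3_M * M_NAHCO3

for conc in (40, 50, 80):   # alfalfa, rice, and maize treatment concentrations (mM)
    g_na2co3, g_nahco3 = carbonate_recipe(conc)
    print(f"{conc} mM: {g_na2co3:.2f} g/L Na2CO3 + {g_nahco3:.2f} g/L NaHCO3")
```

For the 50 mM rice treatment, for example, this gives roughly 0.53 g/L Na 2 CO 3 and 3.78 g/L NaHCO 3 .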
Maize seeds (B73, a commercial cultivar in Heilongjiang Province) were sown in peat soil, with a total of 15 seeds (three pots per treatment and 5 seeds per pot). The seeds were treated with water (CK), 80 mM carbonate solution (SAS), or carbonate solution and AD13−4 (SAS + AD13−4). The growth conditions were 22 °C and 16 h light/8 h dark, with a humidity level of 50–70%. Alfalfa ( Medicago sativa L.) seeds were sown in peat soil, with a total of 180 seeds (three pots per treatment and 60 seeds per pot). On the fifth day after germination, the seedlings were treated with water (CK), 40 mM carbonate solution (SAS), and carbonate solution and strain AD13−4 (SAS + AD13−4). The growth conditions were 22 °C and 16 h light/8 h dark, with a humidity level of 50−70%. For alkaline–sodic treatment, the rate of carbonate solution:soil ( v / v ) was about 2:5. Strain AD13−4 (OD 600 = 1) was inoculated to the soil [1:50 ( v / v )] during the first treatment. For the first treatment, a double volume of carbonate solution was used to thoroughly irrigate the soil. 4.2. Screening of PGPR and Molecular Identification of Strain AD13−4 For isolation of the rhizosphere microorganism, five grams of rhizosphere soil from native plants grown in Soda saline–alkali land in Northeast China (Anda, Heilongjiang Province, China) was added to 45 mL of sterile water, stirred for 15 min, and stewed for 10 min, then 1 mL of the supernatant was taken and added to 9 mL of sterile water and mixed well (the dilution was recorded as 10 −1 ). Then, 1 mL of it was taken, added to 9 mL of sterile water, and mixed well (the dilution was recorded as 10 −2 ), and so on to prepare bacterial suspensions with different dilutions of 10 −3 , 10 −4 , 10 −5 , 10 −6 , and 10 −7 . A sample of 0.1 mL of each dilution was evenly spread on LB solid medium and incubated at 30 °C for 2–3 d. Single colonies were selected to separate the bacteria by streaking more than 3 times to obtain a single strain. To screen PGPR, 1/2MS medium (pH 8.0) containing 1.5 mM NaHCO 3 was used to simulate the alkaline–sodic condition, and Arabidopsis Col-0 was used as the plant material. Strains that could significantly promote the growth of Arabidopsis seedlings were considered to be PGPR, including AD13−4. The 16S rRNA gene of AD13−4 was PCR−amplified with specific primers (listed in ), and the PCR products were determined by Sanger sequencing. The 16S rRNA sequence was uploaded (gene bank number OR863787, NCBI number SUB14003048) . Via blasting in NCBI databases (rRNA/ITS databases), AD13−4 was identified as a bacterium belonging to the genus Bacillus altitudinis . MEGA X was used to construct the phylogenetic tree. Neighbor−joining was used, with 1000 bootstrap replications. The physiological and biochemical identification of Bacillus altitudinis AD13−4 was conducted using Micro−biochemical identification tubes (HOPEBIO, Shanghai, China). 4.3. Physiological Determination and Histochemistry Assay A quantity of 0.2 g roots of one−month−old alfalfa plants were collected for physiological measurement. Briefly, guaiacol colorimetry was used for the determination of the POD activity, nitro blue tetrazolium photoreduction for SOD activity, and the hydrogen peroxide method for CAT activity, while APX activity was determined according to the reduction in the AsA content . The MDA content was determined using the thiobarbituric acid method, and the proline content using the acidic ninhydrin colorimetry method . Three biological replicates per sample were used. 
Total antioxidant capacity assay kits (Shanghai Enzyme-linked Biotechnology Co., Ltd., Shanghai, China) were used to gauge the radical scavenging capacities of DPPH (515 nm), ABTS (734 nm), and FRAP (593 nm). Three biological replicates per sample were used . For the GUS (β–glucuronidase) staining assay, the seedlings were incubated in a GUS staining buffer [80 mM sodium phosphate buffer (pH 7.0), 0.4 mM potassium ferricyanide, 0.4 mM potassium ferrocyanide, 8 mM EDTA, 0.05% Triton X−100, 0.8 mg/mL 5−bromo−4−chloro−3−indolyl−β−D−glucuronide] for 4 h at 37 °C . Three independent experiments per sample were used. 4.4. Determination of Total Phenols, Total Flavonoids, and Total Terpenoids The root material was the same as that used for physiological determination in 4.3. The procedures for the determination of total flavonoids and total phenolics referred to . For determination of total terpenoids, 10 mL of methanol was added to 0.1 g of powder of the tissues, ultrasonicated (80 kW) for 30 min, and left stewing overnight at 4 °C; then, the supernatant was used as a test solution. For standard solution preparation, 25, 50, 100, 150, 200, 300, and 400 μL of 1 mg/mL ursolic acid solution were evaporated dry in an 80 °C water bath, then 200 μL of 5% vanillin ice acetic acid solution and 400 μL of concentrated sulfuric acid were added and incubated in a 60 °C water bath for 15 min, and 5 mL of glacial acetic acid was added. The same volumes of standard solutions were added in the proper order to the test solutions. The absorbance value was determined at 543 nm. The test solutions without addition of ursolic acid were used as control. Three biological replicates per sample were used . 4.5. Determination of Salt, Alkali, and pH Tolerance of Strain AD13−4 To investigate the pH tolerance range of strain AD13−4, 1 mL of AD13−4 culture (OD 600 = 1) was added to 100 mL of LB liquid medium with different pH values (pH 3−10) and cultured at 30 °C for 20 h. The concentration of the bacterial culture was measured every 2 h and the growth curve was plotted. To investigate the alkali tolerance of strain AD13−4, 1 mL of AD13−4 culture (OD 600 = 1) was added to 100 mL LB liquid medium with different concentrations of NaHCO 3 (0−200 mM) and cultured at 30 °C for 24 h. The concentration of the bacterial culture was measured every 2 h, and the growth curve was plotted. To investigate the salt tolerance of strain AD13−4, 1 mL of AD13−4 culture (OD 600 = 1) was added to 100 mL LB liquid medium with different concentrations of NaCl (0−2 M) and cultured at 30 °C for 24 h. The concentration of the bacterial culture was measured at 12 h and 24 h, and the growth curve was plotted. To determine the H + secretion ability, 1 mL of AD13−4 culture (OD 600 = 1) was added to 100 mL LB liquid medium (pH 8.1) and cultured at 30 °C for 4.5 h. The pH value was measured every 1.5 h. 4.6. Determination of Characteristics of Strain AD13−4 To detect the nitrogen fixation capacity of strain AD13−4, Ashby’s Mannitol Agar medium (10 g/L mannitol, 0.2 g/L KH 2 PO 4 , 0.2 g/L MgSO 4 ·7H 2 O, 0.2 g/L NaCl, 0.1 g/L CaSO 4 ·2H 2 O, 5 g/L CaCO 3 ) was used. To detect the ability to produce ACC deaminase, DF and ADF medium (T10407, T10408, Saint–bio, Shanghai, China) were used. To detect the ability to produce IAA, the Salkowski colorimetric reaction was conducted. To detect the ability to produce siderophiles, MKB liquid medium (casamino acid 5.0 g, glycerol 15 mL, K 2 HPO 4 2.5 g, MgSO 4 ·7H 2 O 2.5 g, H 2 O 1000 mL, pH 7.2) was used. 
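The tolerance assays above record growth curves as OD 600 readings over time; a common way to summarize such curves is the specific growth rate obtained from a log-linear fit over the exponential phase. The sketch below illustrates that calculation with hypothetical readings, since the actual AD13−4 measurements are reported in the figures rather than as raw values.

```python
import numpy as np

# Hypothetical OD600 readings taken every 2 h during exponential growth;
# replace with the measured values for a given pH / NaHCO3 / NaCl condition.
time_h = np.array([2, 4, 6, 8, 10])
od600 = np.array([0.05, 0.11, 0.24, 0.50, 1.02])

# Specific growth rate mu (h^-1) is the slope of ln(OD600) versus time.
mu, intercept = np.polyfit(time_h, np.log(od600), 1)
doubling_time = np.log(2) / mu
print(f"mu = {mu:.3f} h^-1, doubling time = {doubling_time:.2f} h")
```

Comparing the fitted growth rates across pH, NaHCO 3 , and NaCl gradients is one way to quantify the tolerance ranges described above.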
To detect the phosphate solubilization ability, the Molybdenum antimony colorimetric method was employed. To detect the ability to generate biofilms, the Giemsa staining method was used. 4.7. Detection of Rhizospheric Soil Enzymatic Activities After removing loose soil from the roots, a thin layer of soil attached to the alfalfa root surface was collected as rhizosphere soil. The 3,5−dinitrosalicylate colorimetric method was used to determine sucrase activity at 508 nm. Sodium phenolate was used to determine urease activity at 578 nm . The electrical conductivity (EC) of the saturated soil extract (soil:water = 1:5, w / v ) was determined using an EC meter (DDS–11A, Shang Hai Yoke Instrument Co., Ltd., Shanghai, China). The pH values were measured using the saturated soil extract with a pH meter . The rhizosphere soil was the same as that used for 16S rRNA gene sequencing. 4.8. Alfalfa RNA Isolation, Library Construction, RNA Sequencing, and RT−qPCR The roots used for RNA sequencing were the same as those used for physiological determination in 4.3. The total RNA was extracted using the TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer’s protocol. The procedures of library construction, RNA sequencing, and RT−qPCR referred to . For RT−qPCR, three independent experiments per sample and three replicates per experiment were used. The primer sequences are listed in . Raw transcriptome reads were deposited in the NCBI Sequence Read Archive (SRA) database (PRJNA1025112). 4.9. Preparation and 16S rRNA Gene Sequencing of Rhizospheric Microbiota The rhizosphere soils of one−month−old alfalfa plants were collected. The procedure of 16S rRNA gene sequencing referred to . Three replicates per sample were used. Briefly, microbial genomic DNA was extracted using the TruSeqTM DNA Sample Prep Kit (Illumina, San Diego, CA, USA), and the hypervariable region (V3–V4) of the bacterial 16S rRNA genes was amplified using the primers 338F and 806R in an GeneAmp ® 9700 PCR thermocycler (Applied Biosystems, CA, USA). The PCR products were purified using an AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, USA) and quantified using a Quantus™ Fluorometer (Promega, Beijing, China). Purified amplicons were pooled in equimolar amounts and paired−end−sequenced on an Illumina MiSeq PE300 platform (Illumina, San Diego, CA, USA) according to the standard protocols of Majorbio Bio−Pharm Technology Co., Ltd. (Shanghai, China). Raw reads were deposited in the NCBI Sequence Read Archive (SRA) database (PRJNA1028144). The phylogenetic tree was constructed by selecting the Bacillus strains in NCBI database with the highest identity of 16S rRNA sequences to those of AD13−4. MEGA X was used to construct the phylogenetic tree. 4.10. Bioinformatics Analysis For transcriptomic analysis, alfalfa RNA was extracted according to the protocol using CTAB-PBIOZOL reagent. After identifying and quantifying total RNA using a bioanalyzer (Thermo Fisher Scientific, MA, USA), the mRNA was purified using Oligo(dT) magnetic beads. The cDNA was generated by a reverse transcription kit and then amplified by PCR. The products were purified using Agencourt Ampure XP Beads (A63882, BECKMAN), and then dissolved with EB solution. The double-stranded PCR products were denatured and cycled using a cleaver oligonucleotide sequence. The single-stranded circular DNA (ssCir DNA) format was used for the final library. 
The library was sequenced on the Illumina HiSeq X Ten platform, and the raw data were filtered using fastp (v 0.23.2), mainly to remove reads with adapters. HISAT was used to align the clean reads to the alfalfa genome (‘Zhongmu No.1’ alfalfa). FPKM was calculated based on the length of each gene and the number of reads mapped to it. The DESeq2 R package (v. 1.22.1, |log 2 fold change| ≥ 1 and FDR < 0.05) was used to analyze the differential expression between each pair of groups, and the p value was corrected using the Benjamini and Hochberg method . The enrichment analysis was performed based on the hypergeometric test. For KEGG, the hypergeometric distribution test was performed at the level of the pathway; for GO, it was performed based on the GO term . For rhizospheric microbiota analysis, paired–end reads from the original DNA fragments were merged using FLASH (v 1.2.11, default). Paired-end reads were assigned to each sample according to the unique barcodes. Sequence analysis was performed using the UPARSE software package (v 7.0.1090, http://drive5.com/uparse/ , accessed on 20 March 2024) with the UPARSE–OTU and UPARSE–OTUref algorithms. Sequences with ≥97% similarity were assigned to the same OTUs. Representative sequences were picked for the OTUs, and the RDP classifier was used to annotate taxonomic information for each representative sequence. To compute alpha diversity, we rarefied the OTU table and calculated three metrics: Chao1, which estimates species richness; Observed Species, the number of unique OTUs found in each sample; and the Shannon index. Cluster analysis was preceded by principal component analysis (PCA), which was applied to reduce the dimensions of the original variables using the QIIME software package (v 1.9.0, default). QIIME was used to calculate both the weighted and unweighted UniFrac distances, which are phylogenetic measures of beta diversity. The unweighted UniFrac distance was used for principal coordinate analysis (PCoA) and unweighted pair group method with arithmetic mean (UPGMA) clustering. To confirm differences in the abundances of individual taxa between two groups, Metastats software (v 1.0, default) was utilized. LEfSe was used for the quantitative analysis of biomarkers within different groups to provide biological class explanations and to establish the statistical significance, biological consistency, and effect–size estimation of the predicted biomarkers .
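The enrichment step described above (a hypergeometric test per KEGG pathway or GO term, followed by Benjamini–Hochberg correction) can be sketched as follows. The counts below (pathway sizes, number of DEGs, background size) are hypothetical placeholders rather than values from this study.

```python
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def pathway_enrichment(deg_in_path, path_size, total_deg, background):
    """P(X >= deg_in_path) under the hypergeometric null of no enrichment."""
    return hypergeom.sf(deg_in_path - 1, background, path_size, total_deg)

# Hypothetical counts: (DEGs in pathway, genes in pathway) against a background
# of 30,000 annotated genes and 2,500 DEGs in total.
pathways = {
    "Phenylpropanoid biosynthesis": (60, 320),
    "Terpenoid backbone biosynthesis": (25, 110),
    "Photosynthesis": (18, 200),
}
pvals = [pathway_enrichment(k, n, 2500, 30000) for k, n in pathways.values()]
reject, fdr, _, _ = multipletests(pvals, method="fdr_bh")
for name, p, q in zip(pathways, pvals, fdr):
    print(f"{name}: p = {p:.2e}, BH-FDR = {q:.2e}")
```

The same survival-function call applies to GO terms by substituting term gene sets for pathway gene sets.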
Our results show that Bacillus altitudinis AD13−4 can significantly improve plant tolerance to alkaline–sodic stress by improving antioxidant capacities, endogenous phytohormone levels, cell division activity, and cell differentiation. Transcriptome analyses indicate that metabolism/secondary metabolism, signaling, photosynthesis, redox reaction, and plant–pathogen interaction pathways were significantly altered under alkaline–sodic stress and the application of strain AD13−4. Consistent with this, the contents of many metabolites and secondary metabolites, e.g., proline, phenolics, flavonoids, and terpenes, which are crucial for plant development and adaptation to the environment, changed significantly.
Our results, to some extent, elucidate the mechanism underlying the interaction between Bacillus altitudinis AD13−4 and alfalfa roots under alkaline–sodic stress and provide clues for improving bacterial strains for use as biofertilizers.
Cross-cultural adaptation and validation of the short nutritional literacy scale for young adults (18–35 years) and analysis of the influencing factors
f22965bd-73b0-4a12-a5e0-d7e7509547b5
11312225
Health Literacy[mh]
With the growth of the social economy and the increase in residents’ income, Chinese residents’ total amount and structure of food consumption have undergone significant changes, and dietary patterns and diet-related behaviors have become more diversified and modernized . Although the nutritional health of China’s nationals has continued to improve, the problem of malnutrition still exists, and the incidence of nutrition-related chronic diseases, such as high blood pressure and hyperlipidemia, is also on the rise . Advocating appropriate diets, promoting dietary health, and accurately assessing individual nutritional literacy have become topics of concern. Nutritional literacy (NL) is a multidimensional concept that refers to an individual’s ability to acquire, understand, process, and apply nutrition information and is usually classified as functional, interactive, and critical nutrition literacy. NL is the most basic, cost-effective, and practical measure to promote healthy nutrition, which is of great worth in promoting the nation’s health . Good nutritional literacy leads to good eating habits, improved dietary quality, healthier dietary choices, and improved nutritional status, and enables the prevention and control of nutrition-related non-communicable diseases . With the widespread use of social media and electronic products, dietary and nutritional information is becoming more and more predominantly available through online media . Several health organizations and individuals often post on social media on topics related to food, nutrition, and health . These health messages from the mass media can have a significant impact on an individual’s subjective health status . However, social media content about health and nutrition may harm individuals’ nutritional perceptions and food choices . This requires users to look critically at nutritional information from various sources and to screen and judge the information. With social media’s popularity, the younger generation has become its primary user group. However, this group faces the dual challenges of declining diet quality and growing obesity in China. In particular, adults between 18 and 35 are seeing their dietary fat intake rise while their protein and carbohydrate intake decreases, and they are at risk of inadequate mineral intake . These phenomena may be related to young people’s dislike of home cooking, frequent breakfast skipping, and preference for fast food . Unhealthy lifestyles and diets in this group may not show immediate adverse effects on health but can increase the risk of chronic diseases . Given this, it is particularly urgent to enhance the nutritional literacy of young people and improve their eating habits through nutrition education and management . Before providing specialized nutritional care and education services to specific groups, it is crucial to have an in-depth understanding of their perceived level of nutritional literacy. This study aimed to culturally adapt the S-NutLit scale and explore the key factors influencing nutritional literacy. We expect that this study will allow us to more accurately assess the nutritional literacy of adults aged 18–35 and enhance their nutritional knowledge to promote the formation of healthy living habits. Nutritional literacy levels affect people’s health status. Understanding the causes of inadequate nutrient literacy is essential to reducing the burden of chronic non-communicable nutrition-related diseases . 
We chose demographic variables as influencing factors to analyze to understand NL among young people. Participants and study design The study was implemented using a convenience sampling method from December 2023 to March 2024. According to the definition of young adults in China’s Medium-and Long-Term Youth Development Plan (2016–2025), we limited the age of participants to 18–35. Therefore, the main inclusion criteria for participants in this study were 18–35 and voluntary participation. According to the sample estimation method, the sample size should be 5–10 times the number of scale entries . The sample size was calculated based on the 11 entries in the original S-NutLit scale. The sample size should be between 55 and 110 cases, plus 20% of invalid questionnaires; the inclusion sample should be 66 to 132 cases. A larger sample size is desirable for accuracy, so 508 young people were finally selected to participate in the questionnaire in this study. Translation, counter-translation, and cross-cultural adaptation of the S-NutLit scale The S-NutLit Scale was translated and adapted into English with the permission of Dr. Christophe Matthys . The scale was translated using Brislin’s Two-Way Translation method. First, two native Chinese-speaking college English teachers and graduate students majoring in English with study abroad experience translated the S-NutLit scale into Chinese. The initial Chinese translation was then retranslated into two English versions by two bilinguals without access to the original English questionnaire from linguistic and professional perspectives. The translated scale was then integrated and debugged by a nutritionist fluent in English with experience in related topics. Next, a committee of 5 experts was formed to conduct the cultural debugging, reviewing the questionnaire and judging the appropriateness of each question. These 5 experts included 2 nutritionists, 2 clinical nurse specialists, and 1 English professor. The criteria for selecting the experts were: (1) extensive expertise in nutrition or nursing. (2) familiarity with the step-by-step and flow of tonal translation. (3) a graduate degree and work experience. The original scale was revised, considering the opinions of experts and the current situation. Four points in the original scale were modified to make the scale entries more applicable to the Chinese youth population. Delete “fair trade coffee” from entry two as one of the examples of sustainable nutrition. This is because the meaning of “fair trade coffee” is unfamiliar to domestic consumers , and no examples of sustainable nutrition are particularly relevant to its meaning. In entry 3, the original scale, “Flemish Food Triangle,” is a Belgian educational model depicting an inverted pyramid of dietary guidelines . It is similar in meaning and function to the Chinese balanced diet pagoda. Therefore, the “Flemish Food Triangle” was changed to “Chinese balanced diet pagoda.” To simplify the formulation, replace “I can distinguish between reliable and less reliable websites” in entry 5 with “I can tell if a website is reliable.” In Entry 7, to be more in line with the expression habits of Chinese people, change “I have the necessary skills to apply nutrition information when cooking.” to “I can apply nutrition information when cooking.” Fifty young people were invited to fill out a pre-survey questionnaire to assess the clarity and comprehensibility of the items after cultural adaptation of the above four parts of the original scale. 
The Chinese version of the S-NutLit Scale was developed after listening to the opinions and suggestions of all parties. Questionnaire design It consists of general information and the Chinese version of S-NutLit. The general information is self-designed for a total of 15 items, respectively: age, gender, BMI, ethnicity, marriage status, education level, occupation, usual place of residence, monthly income of the family, educational level of the father, academic level of the mother, whether or not they had taken a nutrition-related course, whether or not they had any chronic diseases such as diabetes, hypertension, and so on, how often they paid attention to nutritional information, and their self-assessment of their level of health. A detailed questionnaire can be found in Supplementary Material 1. The S-NutLit scale Dr. Jules Vrinten and colleagues developed the S-NutLit scale . The scale has two dimensions: information skills and expert skills. It is scored on a Likert-type scale ranging from 1 to 5, with an additional” Additional answer option” for entry 7, which is not included in the total score. Higher scores indicate higher nutritional literacy among young people. The original scale was reliable and valid, with a Cronbach’s alpha of 0.80. Data collection We used a convenience sampling method to recruit participants through an online survey service platform. The participants were mainly in China’s Liaoning, Shandong, and Hunan provinces. The researcher explained the purpose of the survey to the participants, distributed the electronic version of the questionnaire, and informed them of the precautions to take when completing the questionnaire. After rigorous screening and sorting, 508 questionnaires were collected. Data were entered in pairs to ensure accuracy and completeness. Two weeks later, 50 survey respondents were randomly selected from the participants to assess the retest reliability of the scale. Statistical analysis Statistical description of general information was done through frequencies and percentages. Item analysis of the scale was performed using the correlation coefficient method and the critical ratio (CR). Validity analysis was conducted using content validity and structural validity. Internal consistency reliabilities and retest reliability ratings have been employed in reliability analysis. Categorical variables were subjected to independent samples t-test or one-way ANOVA. After screening for statistically significant variables ( P < 0.05), multiple linear regression was performed to screen for factors that could impact young people’s nutritional literacy. Item analysis CR is an independent samples t-test for high grouping (highest 27%) and low grouping (lowest 27%) to assess the discriminant properties of the scale . Entries with a critical ratio greater than three and statistically significant differences between the high and low subgroups were retained. Correlations between entries and overall scores were examined to assess the homogeneity of entries. Retaining entries with correlation values ≥ 0.4 . Validity analysis Content validity Six experts in the fields of nutrition (3), nursing (1), public health (1), and psychology (1) were invited to form an expert committee to conduct the content validity analysis. All experts held intermediate or higher-level titles and had at least five years of work experience in their respective fields. They possessed solid professional skills and showed high motivation to participate in this study. 
The six experts assessed the scale using the Item-Level Content Validity Index (I-CVI) and the Scale-Level Content Validity Index/Average (S-CVI/Ave). In general, the content validity of a scale is considered good when S-CVI/Ave ≥ 0.90 and I-CVI ≥ 0.78 . Construct validity After that, we randomly assigned the 508 samples to two groups of the same size. The first group was used to conduct Exploratory Factor Analysis (EFA), and the second was used to conduct Confirmatory Factor Analysis (CFA). EFA was generally considered appropriate when the Kaiser-Meyer-Olkin (KMO) value was ≥ 0.6 and Bartlett’s test of sphericity was P < 0.05 . EFA reflects how well a scale can measure a psychometric trait or a theoretically constructed construct . The study used principal component analysis and maximum variance orthogonal rotation. The number of dimensions was determined using eigenvalues > 1 and a scree plot . Cumulative variance contributions greater than 50% were generally considered desirable , and items with factor loadings greater than 0.4 were retained . CFA was used to explore the consistency of the EFA-constructed framework with the actual situation , and the fit and applicability of the model were evaluated using the comparative fit index (CFI > 0.9), goodness-of-fit index (GFI > 0.9), Tucker–Lewis Index (TLI > 0.9), root mean square error of approximation (RMSEA < 0.08), and chi-square/degrees-of-freedom ratio (χ 2 /df ≤ 3) . Standardized factor loadings were used to calculate the average variance extracted (AVE) and CR values. The AVE values were used to assess the convergent validity of the model, and the CR values were used to determine its composite reliability. An AVE greater than 0.36 was considered acceptable, greater than 0.5 was considered desirable, and a CR value greater than 0.7 indicated that the scale had adequate internal consistency . Finally, the discriminant validity of the model was judged using the heterotrait-monotrait ratio (HTMT). The model has good discriminant validity if HTMT is less than 0.85 . Reliability analysis Reliability was evaluated using Cronbach’s alpha coefficient and retest reliability. Homogeneity and intrinsic correlation between the items of the Chinese version of the S-NutLit scale were assessed using Cronbach’s alpha coefficient, for which a value of at least 0.7 was required . Two weeks later, a sample of 50 cases was randomly selected from the previous participants for repeated measurements. The intraclass correlation coefficient (ICC) was calculated, and an ICC > 0.7 indicated good scale stability .
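For reference, Cronbach’s alpha as used here can be computed directly from the n × k item-score matrix as α = k/(k − 1) × (1 − Σσ²_item / σ²_total). The sketch below is an illustrative implementation with a toy matrix; the real input would be the 508 × 11 matrix of S-NutLit item scores.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an n_respondents x n_items matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)        # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Toy 5-respondent x 4-item example, for illustration only.
demo = [[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 3], [4, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(demo):.3f}")
```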
Descriptive statistics A total of 508 participants were recruited for analysis in this study. Among them, 308 (60.6%) were 18–25 years old and 293 (57.7%) were female; BMI was in the normal range for 342 (67.3%), and 340 (66.9%) were unmarried. Table shows the detailed general information. The skewness and kurtosis values are between −2 and 2, consistent with an approximately normal distribution (see Table ). Item analysis In this study, the S-NutLit scale had 11 entries with scores between 10 and 55; scores ≤ 32 defined the low group and scores ≥ 39 the high group. The CR ranged from 10.070 to 18.545, indicating good discrimination. The correlation coefficients between the entries and the overall scale score ranged from 0.443 to 0.664, which suggests that the individual entries correlate with the scale as a whole. Deleting any single entry did not increase the scale’s Cronbach’s alpha (Cronbach’s alpha = 0.826; Table ). Validity analysis Content validity analysis The content validity measures given by the six specialists were an I-CVI between 0.833 and 1.000 and an S-CVI of 0.908. The content validity was therefore within the acceptable range. Exploratory factor analysis Structural validity was analyzed using the two dimensions and 11 entries of the original scale, with a scale KMO value of 0.857 and Bartlett’s test of sphericity χ 2 = 768.620 ( p < 0.001), indicating that the data were suitable for exploratory factor analysis. In the scree plot (Fig. ), the downward trend levels off after the third point, supporting the two-factor structure of the original scale. The cumulative variance contribution is 51.029%, and the factor loadings are within the acceptable range, as shown in Table . Confirmatory factor analysis The confirmatory factor analysis showed that the model had CFI = 0.964, GFI = 0.950, TLI = 0.954, RMSEA = 0.053, and χ 2 /df = 1.720. The model was well-fitted, and the fit indices were satisfactory. Consistent with the original scale, the translated S-NutLit scale has two dimensions, information skills and expert skills, and 11 entries (see Fig. ). Convergent validity and discriminant validity The AVE values for the two dimensions of the model are 0.420 and 0.515, both greater than 0.36, and the composite reliability values are 0.852 and 0.760, both greater than 0.7. These indicate that the model has good convergent validity and composite reliability. The HTMT value is 0.515, less than 0.85, indicating that the model has good discriminant validity (Table ). Reliability analysis The Cronbach’s alpha coefficient for the total scale was 0.826, and the Cronbach’s alpha coefficients for the two dimensions were 0.825 and 0.732, indicating that the reliability of the scale and its dimensions was good. The retest reliability of the scale was calculated in 50 randomly selected cases among the participants. The results showed that the retest reliability of this scale was 0.818, indicating that the scale is stable for repeated measurement.
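The AVE and composite reliability figures reported above are derived from the standardized factor loadings, with AVE = Σλ² / n and CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)]. The sketch below shows the computation with hypothetical loadings; it does not use the fitted loadings of the Chinese S-NutLit model.

```python
def ave_and_cr(loadings):
    """AVE and composite reliability from standardized factor loadings."""
    lam = list(loadings)
    ave = sum(l * l for l in lam) / len(lam)
    num = sum(lam) ** 2
    cr = num / (num + sum(1 - l * l for l in lam))
    return ave, cr

# Hypothetical standardized loadings for a 7-item dimension (not the fitted values).
ave, cr = ave_and_cr([0.62, 0.70, 0.65, 0.58, 0.68, 0.66, 0.63])
print(f"AVE = {ave:.3f}, CR = {cr:.3f}")
```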
Single-factor analysis of young people's nutritional literacy
The results of the univariate analysis of variance showed statistically significant differences in young people's nutritional literacy by level of education, monthly family income, father's educational level, mother's educational level, whether or not they had taken a nutrition-related course, and how often they paid attention to nutritional and health information (P < 0.05). Pairwise differences were further examined with the Bonferroni test, as shown in Table .
Multiple linear regression analysis of young people's nutritional literacy
Multiple stepwise linear regression analysis showed that the level of education, the mother's education, whether or not they had received nutrition-related courses, and the frequency of attention to nutritional information were potential influences on S-NutLit scores. The variance inflation factor (VIF) of all variables in the collinearity diagnostics was less than 5, indicating no multicollinearity among the variables (see Table ).
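A minimal sketch of how the regression and the VIF check could be reproduced is given below. The DataFrame, file name, and column names (score, education, mother_edu, nutrition_course, info_attention) are hypothetical, and plain OLS with categorical predictors stands in for the stepwise procedure, so this is an illustration rather than the authors' analysis.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical survey data: total S-NutLit score plus candidate predictors.
df = pd.read_csv("nutlit_survey.csv")  # assumed file layout

# Ordinary least squares with categorical predictors (a simplified stand-in
# for the stepwise selection described in the text).
model = smf.ols(
    "score ~ C(education) + C(mother_edu) + C(nutrition_course) + C(info_attention)",
    data=df,
).fit()
print(model.summary())

# Collinearity diagnostics: VIF for every design-matrix column except the intercept.
exog = model.model.exog
for i, name in enumerate(model.model.exog_names):
    if name == "Intercept":
        continue
    print(f"{name}: VIF = {variance_inflation_factor(exog, i):.2f}")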
Malnutrition remains prominent in China, and there is a need to enhance dynamic monitoring of nutritional characteristics and to develop effective nutritional improvement strategies at the individual level . In this study, the S-NutLit was linguistically translated and culturally adapted, its psychometric properties were assessed in a sample of young adults, and the factors influencing young people's nutritional literacy were analyzed. The translated edition of the S-NutLit has the same number of items and the same factor structure as the original English version, with two dimensions (information skills and expert skills) and 11 entries . The scale was used for the first time in a Chinese population and showed excellent validity and reliability. It can accurately evaluate the nutritional literacy of young people and provide support for nutritional monitoring.
When we compared the reconciled back-translated version with the original version, we found some differences between the two versions. These differences occurred in entries 5 and 7 and were mainly related to each language's distinct grammatical and syntactic rules. When discrepancies were found, the back-translator provided the translator with a detailed explanation of the differences between the two versions. Based on the discussion, the translator modified the discrepant items, and the back-translator then translated the modifications from Chinese to English. This process was repeated until the scale items in the two English versions had the same meaning. In addition, we found that domestic consumers are unfamiliar with the concept of "fair trade coffee" , so we tried to replace it with a comparable concept to avoid the confusion caused by direct translation. However, we could not find a concept that fully replaces "fair trade coffee" while still reflecting the principles of sustainable nutrition. Because the term appears in Entry 2 only as an example of sustainable nutrition, its removal has little effect on the item's meaning. Therefore, to minimize inaccurate scoring due to comprehension bias, the experts recommended removing this example from the scale. Such an adjustment ensures the scale's accuracy and applicability while maintaining the fluency of the assessment process and respondent engagement.
When a participant selected the "additional answer option" for entry 7, it indicated that entry 7 did not apply to that participant. Few participants (9.8%) in our study selected the "additional answer option". In all subsequent exploratory and confirmatory factor analyses, we treated this response as a missing value and did not include it in the analyses .
The Chinese version of the S-NutLit scale has an S-CVI of 0.955 and an I-CVI of 0.833 to 1.00, both greater than 0.8. The content covered by the scale can reflect the concept of NL. Consistent with the original scale, two main factors were extracted, and the cumulative variance contribution of the two factors in this study was 51.029%. This result suggests that the individual entries in the scale have good explanatory power for interpreting young people's NL. The fit indices in the confirmatory factor analysis were satisfactory, indicating good construct validity. Cronbach's alpha coefficients enable the evaluation of scale quality . The Cronbach's alpha coefficient of the Chinese version of the S-NutLit scale is 0.826, the Cronbach's alpha coefficient of the original scale is 0.80, and the coefficients for the "information skills" and "expert skills" dimensions are 0.83 and 0.79, indicating that the S-NutLit has high internal reliability. The AVE for the "information skills" dimension is 0.420, which may be related to the additional answer option in entry 7. Its presence affected the validity of the dimension, which nevertheless remained acceptable despite the relatively low AVE value. Retest reliability was assessed after two weeks; the retest reliability in this study was 0.818, compared with 0.74 for the original scale. The Chinese version of the S-NutLit therefore has higher retest reliability, indicating that it can reliably and stably measure young adults' nutritional literacy. In short, the Chinese version of the S-NutLit scale can effectively measure NL in young adults and can be further applied in future clinical practice.
The mean score ± standard deviation of nutritional literacy in this study was 35.31 ± 6.815, which places the nutritional literacy of young Chinese adults at an intermediate level.
The findings are consistent with those of two earlier studies conducted in China , suggesting that the nutritional literacy of young people still needs to be further improved. The results of this study show that young people's level of education, whether or not they had taken a nutrition-related course, their mother's level of education, and the frequency of attention to nutritional health information entered the regression equation. These four variables therefore appear to be factors affecting young people's nutritional literacy.
The present study found that the level of education was related to individuals' nutritional literacy and that there was a significant difference in nutritional literacy between young people with postgraduate qualifications and those with other levels of education. Other studies have reached similar conclusions, finding that people with more education perform better in terms of eating behaviors and that low education is a barrier to nutritional literacy . Educational level predicts disease risk, health behavior patterns, and diet quality more accurately than other socioeconomic factors . This may be because individuals at higher education levels have a greater capacity to acquire knowledge and skills and are better able to understand, process, and apply the nutrition information acquired . More highly educated people are also more likely to have access to knowledge and data regarding diet and wellness .
In China, mothers are primarily responsible for taking care of and educating their children and tend to devote more energy and time to family life, so their words and actions significantly influence their children in daily life. Mothers with higher levels of education generally have greater health awareness and nutritional knowledge, which they are more likely to pass on to their children, thereby increasing their children's understanding of health and ability to improve it .
Nutritional literacy is essential in food education programs that promote healthy eating habits and general health in individuals . The results of this study are consistent with previous studies, which suggest that nutrition education can be effective in improving individuals' nutritional literacy levels . Nutrition knowledge and literacy levels are interrelated and positively correlated , and school-based nutrition education can enhance students' nutrition knowledge and skills . Nutritional knowledge influences individuals' perceptions and choices of food and can motivate individuals to choose foods of high nutritional value . To improve individuals' nutritional literacy, nutrition courses in schools and online instructional videos posted through official social media accounts are necessary. These videos can guide young people in learning about nutrition and applying it to their daily diets .
In this study, the frequency of searching and browsing for nutritional health information was related to an individual's nutritional literacy. Participants who reported paying frequent attention to nutritional health information ("often" or "always") tended to differ significantly in nutritional literacy from those who paid less attention to such information. Literacy develops gradually, and individuals who pay regular attention to nutrition and health information accumulate a wealth of nutritional knowledge and a better understanding of nutritional concepts and of the impact of food choices on health .
Keeping up to date with nutritional knowledge can motivate individuals to adjust unscientific eating habits and make more scientific food choices. Encouraging and facilitating young people to pay regular attention to nutritional health information is an effective way to improve nutritional literacy.
Although several existing commonly used scales play an essential role in assessment and research, they inevitably have some limitations. The Nutrition Literacy Assessment Instrument (NLAI) does not measure the ability to critically view nutrition literacy and apply nutrition knowledge . The Nutritional Literacy Scale (NLS) primarily assesses the respondent's understanding of nutritional information . The Chinese Health Literacy Scale for Low Salt Consumption - Hong Kong population (CHLSalt-HK) assesses health literacy related to low salt intake using older adults as the target population . The Nutrition Literacy Measurement Scale for Chinese Adults (NLMSC) is primarily intended for the adult public . However, nutritional literacy can be significantly affected by multiple factors such as social environment and economic conditions, and there may be significant differences in these aspects among individuals of different ages, which limits the relevance and adaptability of these scales when they are applied to a specific age group. The S-NutLit Scale contains entries on three levels, functional, interactive, and critical, which are used to identify and assess nutritional literacy comprehensively. It is designed to determine the nutritional literacy of young people and is therefore more targeted than other scales. The scale is concise and has good reliability and validity, which provides healthcare professionals with a more convenient and accurate tool to assess the nutritional literacy of young people.
Limitations
Convenience sampling, which is flexible and cost-effective, was used in this study. However, the arbitrariness of convenience sampling in determining the sample may lead to selection bias. In this study, this bias was mainly reflected in the distribution of occupational categories, with a high proportion of student respondents, totaling 276, or 54.3%, of the total respondents. This may result in a less representative sample. With the popularisation and lengthening of tertiary education in China and the growing emphasis on lifelong learning, young people's graduation age tends to increase. The data for our study come primarily from an online service platform with a large student user base. Therefore, even with the best efforts to reach young people from different fields and backgrounds, our research sample still tends to reflect the views of the student population to some extent. For practical reasons, it was not feasible to invest more effort in recruiting young people from other occupations. To mitigate this limitation in future studies, we recommend using a stratified sampling method, whereby young people are stratified according to their occupational characteristics and a random sample is drawn from each stratum. In addition, increasing the sample size is an effective strategy to reduce the impact of bias. This study mainly considered individual factors such as gender and age and family factors such as parents' education level. However, social factors, such as government nutrition policies and community nutrition awareness, also influence individual nutritional literacy.
In the future, social-ecological systems theory can be applied to explore how individual nutritional literacy is affected by multiple factors, such as the individual, the family, and the community. On this basis, dietary interventions can be designed and implemented more comprehensively to promote the nutritional health of individuals and communities.
The Chinese version of the S-NutLit Scale contains 11 entries and two dimensions with satisfactory reliability and validity. The adapted and validated S-NutLit scale is well suited to the Chinese population, and its applicability in other countries can be further explored. A low level of education of individuals and their mothers, a lack of experience with nutrition-related courses, and infrequent attention to nutritional health information leave some young people vulnerable to low nutritional literacy. More attention should be given to nutritional health issues among this group of young people. By applying the Chinese version of the S-NutLit Scale, we can more conveniently and accurately assess individuals' nutritional literacy levels. This, in turn, supports the formulation of targeted nutrition education programs and public health policies. It also helps us more effectively identify high-risk groups and provide them with the necessary support and intervention, thus promoting the nutritional health of society as a whole.
How to Improve Surveillance Program for Shiga Toxin-Producing
fb81cb2a-d1d6-4aa0-bdf5-5306667d86b4
11206285
Microbiology[mh]
1.1. Escherichia coli
E. coli is a gram-negative, facultatively anaerobic, non-spore-forming bacterium belonging to the family Enterobacteriaceae . E. coli is the predominant facultative anaerobe in the intestinal flora of most animal species and is usually free of pathogenicity. However, many strains have evolved pathogenic mechanisms that enable them to cause a variety of illnesses in both humans and animals, including some extremely serious ones . E. coli can be classified into pathotypes based on its pathogenetic profile, which considers the virulence factors, the diseases caused, and the phylogenetic profile . Among E. coli causing enteric diseases, several pathotypes have been identified, namely intestinal pathogenic E. coli (IPEC), which includes enteropathogenic E. coli (EPEC), enterohemorrhagic E. coli (EHEC), enterotoxigenic E. coli (ETEC), enteroaggregative E. coli (EAEC), diffusely adherent E. coli (DAEC), enteroinvasive E. coli (EIEC), and extraintestinal pathogenic E. coli (ExPEC), which includes uropathogenic E. coli (UPEC), neonatal meningitis E. coli (NMEC), sepsis-associated E. coli (SEPEC), avian pathogenic E. coli (APEC), and mammary pathogenic E. coli (MPEC) . Among the different pathotypes, the group represented by Shiga toxin-producing E. coli (STEC) is of particular interest. This group includes strains that produce at least one member of a class of potent cytotoxins called Shiga toxins. STEC, also called Verotoxin-producing E. coli (VTEC), are named after the Shiga toxin (Stx), which is very similar to a cytotoxin produced by Shigella dysenteriae serotype 1 . Among STEC strains, those with particular pathogenicity for humans are often also referred to as enterohemorrhagic E. coli (EHEC). This pathotype is a zoonotic agent that causes a potentially fatal human illness whose clinical spectrum includes bloody diarrhea, hemorrhagic colitis (HC), and hemolytic uremic syndrome (HUS) . Since 1982, among STEC strains, EHEC has been a major source of food safety concern. The first strain included in this group is E. coli serotype O157:H7, which is still the most widespread EHEC serotype in the United States of America and Europe . E. coli serotype O157:H7 is mostly associated with outbreaks and sporadic cases of HC and HUS in many countries; however, non-O157 STEC have been implicated in outbreaks around the world, and the number of reported cases has steadily increased every year. The Centers for Disease Control and Prevention (CDC) has identified six O groups besides O157 that are of growing concern for public health and that are responsible for 71% of all illnesses caused by STEC: O26, O45, O103, O111, O121, and O145 (which, together with O157, make up the "Big 7") . The European Food Safety Authority (EFSA) has identified five serogroups, O26, O103, O111, O145, and O157 ("Big five") , as being of major concern to human health in Europe. Currently, considerable attention is drawn to non-O157 STEC strains, particularly after the occurrence of a severe foodborne outbreak in 2011 in Germany caused by consumption of sprouts contaminated by STEC O104:H4 . Nevertheless, as stated in the 2020 EFSA risk assessment, the serogroup and the presence of the eae gene should not be considered predictors of pathogenicity and clinical outcome .
1.2. Reservoirs
Ruminants, especially cattle, are a major reservoir of a diverse group of STEC, even though these bacteria do not cause disease in these animals.
Indeed, cattle are asymptomatic excretors of STEC, which are permanent or transient members of their normal intestinal flora . Only the gastrointestinal tract of ruminants can be considered a reservoir for these bacteria. The outbreaks investigated from 1982 onwards have highlighted that ruminants, and cattle in particular, are almost always involved in the transmission of these bacteria to humans . The persistence of STEC in individual animals is due to the ability of these bacteria to colonize specific portions of the gastrointestinal tract. The different interactions between the microorganism and its host influence the pattern of fecal shedding: low-level (<10³ CFU/g of feces) and short-duration (<10 days) shedding occurs when colonization is limited to the rumen; low levels of shedding are also observed when colonization extends to the cecum and colon, but for longer periods (>30 days) .
1.3. Zoonotic Spillover
Cattle farming is undoubtedly the major source of environmental contamination with STEC , but the pathogens have also been recovered from pigs, goats, deer, horses, dogs, and birds . Previous studies have demonstrated that STEC infections in humans should not be attributed only to cattle spillover, since there is considerable evidence of other sources of contamination, such as the STEC outbreak that occurred in Norfolk (UK), which was linked to wild rabbits. The high genetic similarity between STEC strains isolated from domestic pets (dogs and cats) and cattle, and the presence of STEC in wildlife such as red deer and psittacine birds, highlight the possibility that new reservoirs can increase human exposure and risk of infection . Nonetheless, outbreaks of STEC are generally ascribed to the consumption of contaminated foods of bovine origin, particularly undercooked ground beef patties and unpasteurized milk. For example, in studies of retail ground beef in North America, the prevalence of STEC ranged from 9% to 36.4%, with E. coli O157 isolated from 0% to 3.7% of the samples tested . Raw milk and raw milk products are among the main food sources of STEC infection in humans; therefore, identification of the pathogens at the herd level is of primary importance for public health . Fecal contamination can be considered the only relevant route to explain the presence of STEC in raw milk. Therefore, the key point in the control of these pathogens is the reduction of fecal contamination of milk . The control of the circulation of STEC on the farm is complex and involves herd management as a whole. Currently, only managerial practices aimed at limiting the presence of STEC in milk are proposed. These measures include the limitation of the circulation of STEC within the individual farm by hygiene measures (e.g., bedding hygiene, water supplies, alley cleaning) and the minimization of fecal contamination of milk during milking .
1.4. STEC Detection Methods
In order to obtain laboratory confirmation of STEC infection, one of the following requirements needs to be fulfilled, according to the European Centre for Disease Prevention and Control (ECDC): direct detection of the nucleic acid of the stx1 or stx2 gene(s) without strain isolation; isolation of non-sorbitol-fermenting (NSF) E. coli O157 (without testing for Stx or stx genes); or isolation/cultivation of an E. coli strain that produces Stx or harbors the relevant gene(s) .
As demonstrated by Dastmalchi et al. and Renter et al. , molecular approaches for STEC identification in feces have only been used after bacterial colony isolation on specific plates (e.g., MacConkey agar) from this matrix or after an enrichment step of the matrix; no direct molecular analysis of fecal samples has been reported. Indeed, after bacterial isolation, two sets of endpoint PCR or real-time PCR are needed to confirm serotype identification and to evaluate the presence of the virulence factors that identify E. coli serotypes as STEC; the entire procedure is time-consuming (55–60 h), even though STEC identification is very precise . Serological methods are commonly used for STEC infection diagnosis; however, even in this case, most of the analyses cannot be performed directly on the sample but require a prior step of bacterial isolation on agar plates or at least an enrichment step . To date, there are several examples of different immunological assays (e.g., traditional ELISA, lateral flow immunoassay, monoclonal antibodies) with common limitations such as cross-reaction with other pathogens (e.g., Brucella abortus , Yersinia enterocolitica , Vibrio cholerae , Escherichia hermannii , Citrobacter freundii , Citrobacter sedlakii , and Salmonella ) or even with viruses, as in the two norovirus outbreaks in the United States that yielded false positives for STEC infection . DNA-based methods for detecting STEC have the advantage of being rapid and do not require special reagents, such as specific anti-Shiga toxin antisera, or the equipment needed for cell culture. There are numerous PCR methods targeting the stx1 and stx2 genes that are capable of detecting all known Shiga toxin subtypes. These tests can be performed both on single bacterial colonies and on mixed cultures, such as enrichment media, or on samples such as those shown in .
1.5. Gap Analysis
Regarding milk and milk products, the World Health Organization in 2018 summarized the main critical points to be considered when establishing surveillance and control programs related to STEC infections and food contamination. The report, and consequently most of the European national surveillance programs, focus, apart from raw meat and fresh vegetables, on raw milk and raw milk cheeses ready for retail, neglecting the assessment of the sources of contamination at the start of the dairy chain and, more importantly, not identifying the risk factors for the spread of these pathogens within the herd. This approach is the result of a lack of knowledge (gaps) about how to prevent the pathogen from spreading in dairy and beef herds at the beginning of the food chain, although the problems associated with STEC/EHEC foodborne disease have been recognized for several years. A recent and useful gap analysis conducted through the Discontools project highlighted the main critical points and gaps that need to be filled regarding STEC surveillance and control. The analysis can be examined on the Discontools website. Among the several issues reported, in our opinion, one of the most important is related to the epidemiologic analysis. In fact, two major gaps in the understanding of STEC epidemiology have been identified: the mechanisms by which the infection spreads among herds and how animals are exposed within a farm. Strictly related to this latter issue are gaps in the diagnostic approach.
Indeed, new diagnostic approaches and methods are needed to identify mainly non-O157 serotypes in carrier animals and to assess the spread of these serotypes among animals, the contamination of food intended for human consumption, and the human risk related to these foods .
1.6. Aims of the Pilot Study
The presence of STEC in animals, the severity of the disease in humans, and the role of the environment in maintaining these pathogens support the importance of this group of bacteria in a One Health framework. Moreover, the increasing number of reports of foods contaminated with STEC serotypes , the probable underestimation of these pathogens in dairy herds , and the gaps identified in the epidemiology and detection of these pathogens in dairy herds supported the development of a project to fill these gaps and to develop new approaches to increase the effectiveness of the current surveillance programs applied to dairy herds. Within this framework, a pilot study was designed: (i) to assess the feasibility of new molecular methodologies applied to raw milk filters (RMF) as a way to estimate the presence of these pathogens in the herds, and to evaluate the application of the same methodologies to calves' feces, hypothesizing that these animals could play a role in the spread and maintenance of these pathogens in the herd; and (ii) to apply the same methods to identify the presence of the "Big 7" serotypes in the different types of matrices. The presence of STEC in calves has been reported in a few studies , but, to the best of our knowledge, this approach has never been considered as a way to identify potential vector animals within and between herds and as a potential critical point for control measures. The availability of new commercial molecular assays allows the identification of non-O157 serotypes in milk and milk products, simplifying the detection process and making it more efficient. However, these methods have not been assessed and validated for other biological matrices such as RMF and feces. Such validation is pivotal for applying them to a surveillance program based on these matrices.
2.1. Herds and Animals
Bulk tank milk (BTM) and RMF samples were collected from 15 different dairy herds in the Lombardy region, whereas fecal samples were collected from calves belonging to three different dairy herds in the Milano province, also in Lombardy. Samples were grouped by location and time of sampling.
2.2. Samples Collection
Milk and filter samples were collected by technicians of the Regional Breeding Association (ARAL) in different areas of Lombardy during routine sampling for milk quality assessment. In-line filters, or RMF, made of non-woven fabric, are components of milking machines designed to catch debris as well as fecal particles. The filters are usually changed before milking. For raw milk analysis, about 25 mL of BTM were sampled at the end of milking, and RMFs were also taken at the end of milking. At each sampling time, both BTM and RMF were sampled in each herd. Calf feces were collected by herd veterinarians during routine protocols for enteritis prevention. Ten to fifteen g of feces were sampled directly from the rectal ampulla of the animal. Samples were collected in sterile tubes (milk and feces) (VWR international srl, Milano, Italy) and in disposable sterile bags (milk filters). All the samples were immediately frozen (−20 °C), delivered to the laboratories of the Department of Biomedical, Surgical, and Dental Sciences, University of Milan, and kept frozen (−20 °C) until processing.
2.3. Samples Preparation
Before the enrichment step, every sample was thawed at room temperature (23 ± 5 °C) inside a laminar flow hood to avoid sample contamination. All samples (raw milk, milk filters, and bovine feces) were prepared for the DNA extraction process following strict sterility procedures to protect the operator from the pathogen and to avoid contamination that could lead to incorrect results. After thawing, each sample was placed in a Falcon tube (50 mL tubes for milk filters and 15 mL tubes for milk and bovine feces) (VWR international srl, Milano, Italy) and enriched with buffered peptone water (BPW) (Biomérieux, Marcy-l'Étoile, France) at a 1:10 ratio, as suggested by the real-time PCR manufacturer's food sample enrichment protocol. The samples were incubated at 37 °C and 5% CO₂ for 24 h, and then 1 mL of the enriched sample was transferred into a 1.5 mL tube to proceed with the extraction step or to be stored in a −20 °C freezer.
2.4. DNA Extraction
The DNA extraction process was carried out with the commercial SureTect™ STEC extraction kit (ThermoFisher Scientific, Waltham, MA, USA). Briefly, 10 µL of proteinase K (ThermoFisher Scientific, Waltham, MA, USA) was added to the side of the SureTect Lysis Tube, then 10 µL of diluted sample was added to the bottom of the tube. The tubes were capped and incubated in a thermoblock at 37 °C for 10 min and then at 95 °C for 5 min.
After incubation, the supernatant, containing the sample's DNA, was used to proceed with the real-time PCR assay.
2.5. Real-Time PCR Assay
2.5.1. Escherichia coli O157:H7 and STEC Virulence Factors Identification
The ThermoFisher Scientific SureTect™ Escherichia coli O157:H7 and STEC Screening PCR Assay (ThermoFisher Scientific, Waltham, MA, USA) is based on TaqMan™ PCR technology. Dye-labeled probes target unique DNA sequences specific to STEC. This assay detects the STEC stx and eae genes and the E. coli O157:H7 serotype in food and environmental samples. The molecular designs of the primers and probes of this assay are proprietary and cannot be shown. To perform the assay, 20 µL of the sample processed with the SureTect™ STEC extraction kit (ThermoFisher Scientific, Waltham, MA, USA) were loaded into the PCR tube to resuspend the lyophilized master mix already present in the tube. The tubes were capped with optical cap strips and loaded onto the Applied Biosystems™ QuantStudio™ 5 Food Safety System (ThermoFisher Scientific, Waltham, MA, USA) to start the real-time PCR run. The results were analyzed using ThermoFisher Scientific RapidFinder™ Analysis Software v1.1 (ThermoFisher Scientific, Waltham, MA, USA). The PCR running conditions consisted of an initial denaturation at 95 °C for 7 min followed by 50 cycles of denaturation at 95 °C for 5 s and annealing and extension at 60 °C for 45 s. The samples that were positive for at least the stx gene were processed further with the ThermoFisher Scientific SureTect™ E. coli STEC Identification kit (ThermoFisher Scientific, Waltham, MA, USA).
2.5.2. STEC Serotype Identification
The ThermoFisher Scientific SureTect™ E. coli STEC Identification commercial kit (ThermoFisher Scientific, Waltham, MA, USA) is used for the rapid qualitative detection of STEC serotypes (O26, O45, O103, O111, O121, O145) in food and environmental safety samples. To perform the assay, 20 µL of the sample processed with the SureTect™ STEC extraction kit (ThermoFisher Scientific, Waltham, MA, USA) were loaded into the PCR tube to resuspend the lyophilized master mix already present in the tube. The tubes were capped with optical cap strips and loaded onto the Applied Biosystems™ QuantStudio™ 5 Food Safety System (ThermoFisher Scientific, Waltham, MA, USA) to start the real-time PCR run. The results were analyzed using ThermoFisher Scientific RapidFinder™ Analysis Software v1.1 (ThermoFisher Scientific, Waltham, MA, USA). The PCR running conditions consisted of an initial denaturation at 95 °C for 7 min followed by 50 cycles of denaturation at 95 °C for 5 s, followed by annealing and extension at 60 °C for 45 s.
2.6. Protocol Validation
The diagnostic procedure described above is designed for processing food and environmental samples. Therefore, we preliminarily assessed the accuracy of these procedures when applied to the different matrices we wanted to investigate (raw milk, raw milk filters, and feces). Following the study of Albonico et al. , we artificially contaminated negative samples of raw milk filters and calf feces with specific STEC serotypes (O157, O26, O45, O103, O111, O121, O145), provided by the European Union Reference Laboratory VTEC (ISS Rome, Italy), to assess the assay's sensitivity on matrices other than food samples. Each sample was inoculated with 10 or 10² CFU of a single E. coli serotype, previously grown on Columbia Blood agar (ThermoFisher Scientific, Waltham, MA, USA), suspended in physiological solution (0.9% NaCl), diluted to the required concentration, and then treated as described above.
The negative samples were also tested without artificial contamination to ensure that these kinds of matrices do not yield false positives with this specific assay.
2.7. Statistical Analysis
All data were analyzed using SPSS 28.0.1.1 (IBM Corp., Armonk, NY, USA, 2022) and XLSTAT 2023.1.1 (Lumivero, New York, NY, USA). Statistical associations between variables were determined using the χ2 test and Fisher's exact test.
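As an illustration of this step, the sketch below applies the same two tests to a 2×2 contingency table of virulence-gene profiles in RMF versus fecal samples. The table is a simplified illustration built from the counts reported in the Results (37/104 RMF and 71/98 fecal samples positive for both stx and eae), and the scipy-based code is ours, not the authors' SPSS/XLSTAT analysis.

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Illustrative 2x2 table: rows = matrix (RMF, feces),
# columns = (stx + eae positive, other profiles).
table = np.array([
    [37, 67],   # RMF:   37 stx+eae positive out of 104
    [71, 27],   # feces: 71 stx+eae positive out of 98
])

chi2, p_chi2, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi2:.4f}")
print("expected counts under independence:\n", expected.round(1))

# Fisher's exact test (scipy supports 2x2 tables only).
odds_ratio, p_fisher = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, Fisher exact p = {p_fisher:.4f}")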
3.1. Protocol Validation The negative controls tested negative for all the target genes in the assay and all the contaminated controls tested positive for the expected virulence and serotype genes at both 10 and 10² CFU inoculum concentrations, enabling us to proceed with the unknown BTM, RMF, and feces. 3.2. Data Description A total of 290 samples from 18 different dairy herds were collected and analyzed from January to December 2022 . Of these, 88 were BTM, 104 were RMF, and 98 were calves’ feces samples.
Samples have been considered as positive following the principle of maximum precaution and the criterion of direct detection of the nucleic acid of stx1 or stx2 gene(s) without strain isolation . All raw results of real-time PCR are provided in . 3.3. STEC Virulence Factor Identification Regarding virulence gene identification, we found three BTM samples positive for the stx gene, 10 for the eae gene, and no sample positive for both genes. When we considered RMF, a total of six samples tested positive for stx presence, 25 for eae presence, and 37 for the presence of both genes . When fecal samples were considered, 72 samples were positive for the stx gene, 84 for the eae gene, and 71 for both genes. For pre-weaning samples, one sample was positive for the stx gene, 13 for the eae gene, and 71 for both genes; for post-weaning samples, no samples were positive for stx or eae gene and 35 were positive for both genes. These results are summarized in . The comparison between the distributions of virulence genes in RMF and fecal samples is reported in , and the statistical analysis reported in showed a statistical difference (α = 0.05) between them, mainly due to a frequency higher than expected in stx + eae positive fecal samples and, conversely, a lower than expected frequency in RMF samples. reports the comparison between the distributions of virulence genes in fecal samples taken before and after weaning, and reports the results of the statistical analysis. The comparison between the distributions of virulence genes in pre- and post-weaning fecal samples showed a statistical difference (α = 0.05) between them, mainly due to a frequency higher than expected in stx + eae positives in post-weaning fecal samples as well as a lower than expected frequency of eae positive samples. As expected, the pattern was reversed in pre-weaning samples. 3.4. STEC Serotype Identification A total of 83 (70.3%) of 118 samples positive for the stx gene were also positive for at least one STEC serotype. None of the serotypes included in the “Big 7” panel were found in BTM samples, whereas in RMF O157 ( n = 1), O26 ( n = 17), O45/O121 ( n = 7), O103 ( n = 11), O111 ( n = 3), and O145 ( n = 3) were identified. Thirteen stx -positive samples from RMF were negative for serotype identification. These results are summarized in and . The serotype identification of fecal samples led to the identification of O157 ( n = 9), O26 ( n = 29), O45/O121 ( n = 32), O103 ( n = 14), O111 ( n = 1), and O145 ( n = 4) serotypes, while for 19 stx positive samples, the serotype was not identified . All the serotypes identified and classified by type of matrix are reported in to visualize relative abundance. The comparison of the serotype distributions ( and ) between RMF and feces showed a single significant result with a lower-than-expected frequency of O45/O121 in RMF samples, and the opposite in feces. When pre- and post-weaning distributions were analyzed, we did not find any statistically significant difference at Fisher’s exact test (α = 0.05). The single percentages do not add up to 100 because some samples were positive for more than one STEC serotype.
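The χ² and Fisher's exact comparisons reported here were run in SPSS/XLSTAT; as a hedged illustration of the same idea only, the sketch below rebuilds the RMF versus feces contingency table from the counts above (treating stx only, eae only and stx + eae as mutually exclusive categories, with the fecal single-gene counts obtained by subtracting the 71 double positives from the reported totals) and tests it with SciPy. SciPy's fisher_exact accepts only 2 × 2 tables, so the Fisher step uses a collapsed table.

from scipy.stats import chi2_contingency, fisher_exact

# Rows: RMF, feces. Columns: stx only, eae only, stx + eae (counts from the Results above).
rmf = [6, 25, 37]
feces = [1, 13, 71]

chi2, p, dof, expected = chi2_contingency([rmf, feces])
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Collapse to "stx + eae positive" vs "single-gene positive" for a 2 x 2 Fisher's exact test.
odds_ratio, p_fisher = fisher_exact([[37, 6 + 25], [71, 1 + 13]])
print(f"Fisher (2 x 2): odds ratio = {odds_ratio:.2f}, p = {p_fisher:.4f}")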
STEC represent a serious threat to public health and require an efficient surveillance program to prevent outbreaks in humans.
Currently, the only extant surveillance program in Italy, as well as in other countries (e.g., France), involves analysis performed on food (i.e., raw milk, dairy products, raw beef, vegetables) without considering the epidemiologic situation at the herd level, and this program has several methodological and regulatory flaws, such as the choice of primers for molecular identification of the stx gene and the definition of a positive sample. The evidence of several gaps in the current knowledge on the epidemiology of the pathogen, particularly concerning the spread of the infection within the herd, and, therefore, in the surveillance approach at the herd level, supports studies aiming to increase our knowledge on this problem. It is important to highlight that future surveillance programs should focus more on virulence factors rather than serotypes since the risk for human illness is mostly related to the stx gene. Serotype information is still of value, especially if backed up by a full genomic analysis, since it could lead to a better understanding of the strains’ phylogeny and route of transmission. The pilot study described in this paper aimed to apply the currently available diagnostic kit to matrices different from milk and milk products (calf feces and RMF) and to verify whether its application may be helpful in the diagnosis of STEC at the herd level. The results confirmed that the commercial kit may be applied to RMF and feces as well as to the target matrices (milk and milk products). 4.1. STEC Frequency in Different Matrices The results of the analysis of BTM, RMF, and calf feces showed a different epidemiological pattern related to the matrix. Indeed, the proportion of positive samples in BTM was very low (3.4% of stx -positive samples), whereas the proportion on the RMF of the same herds was higher (41.3% of stx -positive samples), as well as in calf feces. Regarding the comparison between RMF and BTM, milk filters were already shown to be a more useful matrix than raw milk for identifying herd pathogens. Indeed, even though the pores of RMF (100–150 μm) are too large to retain bacteria completely, previous studies have shown their usefulness in identifying pathogens. Therefore, these differences were expected and may be explained as follows: Milk: 25 mL of raw milk are sampled from a bulk tank that can hold 150–10,000 liters of milk at 4 °C; this dilution results in a poor detection level, particularly when the prevalence of STEC-positive cows is very low and/or when milking practices are optimal. Milk Filters: with this type of sample, it is easier to find positivity because the main task of the filter is to block and retain any type of fecal or litter debris coming from the milking routine, and all the milk passes through the filter; therefore, there is no dilution effect. These results support the importance of selecting a proper matrix to monitor the presence of the pathogen at the herd level, like the RMF. Indeed, sampling BTM could lead to an underestimation of the prevalence of this pathogen, and for this reason we suggest that this matrix is not the most appropriate to monitor the presence of STEC at the herd level. Nevertheless, further studies are needed to confirm the use of RMF within a surveillance program for STEC in milk and milk products.
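As a worked check of the proportions quoted above, and a purely illustrative picture of the dilution argument (the per-cow milk volume and tank size below are assumptions of ours, not study data):

btm_stx_positive, btm_total = 3, 88
rmf_stx_positive, rmf_total = 6 + 37, 104        # stx-only plus stx/eae double positives

print(f"BTM stx-positive: {btm_stx_positive / btm_total:.1%}")   # ~3.4%
print(f"RMF stx-positive: {rmf_stx_positive / rmf_total:.1%}")   # ~41.3%

# Illustrative dilution effect: milk from one shedding cow (assumed ~30 L) pooled into an
# assumed 5,000 L bulk tank, of which only a 25 mL aliquot is analysed.
cow_l, tank_l, aliquot_ml = 30, 5_000, 25
print(f"Shedder's milk in the aliquot: ~{cow_l / tank_l * aliquot_ml:.2f} mL")   # ~0.15 mL
# The in-line filter, by contrast, sees the entire milking and retains debris undiluted.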
The presence of positive STEC feces in both pre- and post-weaning calves suggests that the infection can be transmitted from positive cows to calves either during calving or by contaminated colostrum and milk. The observed significantly higher-than-expected frequency of stx + eae genes compared with the RMF frequency supports this hypothesis. The evidence of these ways of transmission also suggests a potential preventive measure based on the use of stored STEC negative milk and colostrum as applied to prevent paratuberculosis transmission . However, how the pathogens enter the herd remains to be elucidated. The most probable ways are the purchase of infected animals (calf, heifer, or cow) and the presence of the pathogens in the environment, such as in water (pools, wells) or fresh forage. 4.2. Distribution of Serotypes The distribution of serotypes among the different samples showed only one significant difference, represented by O45/O121 having a higher-than-expected frequency of these serotypes in feces when compared with RMF. Another result worth mentioning is the low prevalence of O157 isolates representing, overall, less than 20% of the serotypes. Moreover, the most important information arising from these analyses, in our opinion, is represented by the fact that 33 out of 113 (29.2%) samples tested positive for the stx virulence gene but negative for the identification of the “Big 7” serotypes. The current monitoring programs usually consider only the “big five” or the “big seven” without considering other STEC serotypes that may represent a growing threat to public health . Following these results, a plausible idea would be to include other common STEC serotypes in the monitoring program, similar to what has been conducted in the work of Capps et al., in order to screen the samples for the other six most common STEC non-top seven serotypes (O2, O74, O109, O131, O168, and O171) and evaluate the prevalence and the resulting burden on public health of these serotypes .
The results obtained further confirmed the usefulness of RMF analysis for the detection of STEC at the herd level. Thus far, our results support the hypothesis that STEC prevalence at the herd level is highly underestimated, and that the surveillance program needs critical and extensive improvements to be more efficient in detecting and preventing STEC infections. Moreover, the presence of STEC in most calf fecal samples and the correspondence between serotype in RMF and feces support the hypothesis of a role of calves in maintaining the infection within the herd. This study aimed to gather information useful to fill the knowledge gap as a preliminary step before designing a proper epidemiological investigation to confirm the role of calves in the epidemiology of these infections. The results of this pilot study also suggest that prevention at the calves’ level may be considered to reduce the risk of spreading the infection within the herd and will support further research projects investigating this aspect of the STEC transmission chain.
The epidemiology of these infections and the characteristics of the pathogens clearly show how a One Health approach will be pivotal in improving our capabilities to control the spread of these infections. Finally, more data regarding serotypes and stx subtypes should be gathered and made available by the scientific community to better understand the transmission routes of this pathogen and to estimate the risk for severe human illnesses.
Only eye study 3 (OnES 3): a qualitative study into how surgeons approach surgery in patients with only one seeing eye
Although there is no standardised definition for an ‘only eye’ patient, the term typically refers to a patient whose contralateral eye fulfils the criteria for legal registration as severely sight impaired based on visual acuity or visual field loss. It could also apply to situations where a patient feels that their better seeing eye is the only one with functional vision, and loss of this ‘only eye’ would have life-changing consequences (including loss of independence and need for significant social care). Patients with an only eye are not uncommon in ophthalmology clinics. They are a heterogeneous group that may have suffered irreversible vision loss in their fellow eye from a variety of disorders, including advanced disease, trauma, severe amblyopia or surgical complication. Performing surgery on an only eye is considered high-stakes surgery because of the potential life-changing consequences of failure or complication. To date, there has been limited research into the experience of only eye surgery from the surgeon’s perspective. A recent study by Jones et al identified, through qualitative in-depth semistructured interviews of ten ophthalmic surgeons, several key differences when it comes to operating on monocular versus binocular patients. These included differences in the consent process, implementation of extra risk reduction strategies and the psychological burdens experienced by the surgeon and/or assisting staff, particularly if there is an unsuccessful outcome. Good mentorship and training were considered important factors that help equip surgeons to perform only eye surgery. This study seeks to build on this research by analysing the only eye surgery experience of a large cohort of practising ophthalmic surgeons. Currently, to the best of our knowledge, no guidelines specific to managing only eye surgery patients have been published by any of the major professional ophthalmology bodies worldwide. This study also aims to provide the foundation for their development. Sampling and recruitment A focus group study design was chosen to understand the experience of only eye surgery from a surgeon’s perspective. The study sample of Australian ophthalmologists was recruited via an ophthalmology professional development webinar hosted in 2019 by one of the authors (GL) over three identically run sessions on different dates. There was a combined total of 76 attendees who were evenly divided across these sessions. This professional development event is hosted annually and is generally attended by a diverse group of Australian ophthalmologists. A different topic is chosen as the focus of the event each year, and for the 2019 edition this was ‘only eye surgery’. At each session, the host explained the nature of the study to attendees and invited them to participate on a voluntary basis. Informed consent was obtained from all participants. There were no attendees who declined to participate. Through this approach, three piggyback focus groups, one from each session, were created. Thus, each focus group consisted of approximately 25 surgeons who were randomly allocated. These large focus groups were considered the most effective approach to engage participants within the context of a busy online webinar, which created a highly dynamic environment to stimulate discussion. Data collection For each focus group, the host (GL) facilitated a discussion about only eye surgery.
As an Australian ophthalmologist themselves, the facilitator had a professional relationship with some of the participants. A topic guide was loosely followed, and participants were encouraged to comment freely about their own experience with only eye surgery. Participants were able to speak through their computer microphone or type via a live text box. The content of these discussions was recorded in a secure and de-identified fashion for later analysis. Data analysis Qualitative data from these focus group discussions, in audio and text form, were transcribed verbatim into the NVivo (QSR International, Cambridge Massachusetts, USA) software program for thematic analysis. An inductive experiential realist approach using open coding was used to analyse patterns in participants’ responses. This approach to analysis allowed the research team to identify patterns across participants’ responses to generate themes from the bottom-up. This more flexible approach to analysis was considered most suitable given the limited prior research and theory on the topic of only eye surgery. Transcript data were read and reread by one of the researchers (JPW), who did not have a relationship with any of the participants, and preliminary codes were developed based on impressions of recurring themes. Following this, interpretations of the coding and themes were discussed among the entire research team. Once agreement on the suitability of coding choices was reached, the coding framework was refined and key themes were finalised. The Standards for Reporting Qualitative Research (SRQR) were used for this qualitative study. Patient and public involvement This report is part of a wider programme of research investigating only eye surgery ‘the Only Eye Study (OnES)’. Consultation with patients and professionals during the planning of this study led to the development of the research agenda and informed the design of the interview topic guide used in this study.
A total of 76 consultant ophthalmic surgeons participated in the study. Their characteristics are summarised in . Five overarching themes specific to only eye surgery were identified, with one divergent theme also identified: Differences in the surgical decision-making process Participants were consistent in their agreement that the decision to operate on an only eye was a different proposition compared with a patient with two functioning eyes. Participants were highly conscious of patient concerns when it came to considering surgery on their only eye. They remarked how this could affect the patient’s willingness to proceed with such surgery. It is important to acknowledge anxiety about surgery for both the patient and to some degree for the doctor undertaking the responsibility of only eye surgery. (S9) Because of the high-stakes nature of only eye surgery, participants tended to adopt a higher threshold when deciding to operate on an only eye. I would only recommend only eye surgery if the patient had deteriorating quality of life related to poor vision, there were no other options and there was a low risk of complications. (S132) In certain situations, participants found that getting a second opinion was helpful, especially if they sensed the patient had any doubts about the surgery. If an only eye patient required a surgery the surgeon felt less familiar with performing, they might refer these patients to a more experienced colleague. Differences in the approach to consent During consent discussions, participants emphasised the importance of clearly communicating the risks of performing surgery on a patient’s only eye, and what this meant in the context of their only eye status.
I use the phrase "you are at no higher risk of a complication having only one eye, but the implication of a complication may have a more significant impact on you and your lifestyle”. (S8) Participants noted that this often required longer or multiple consultations. The concept of a ‘cool off’ period was mentioned, where another appointment is routinely arranged before signing consent so that the patient has time to consider the information before signing the document. Always more care is taken with patient explanation, specific and general risks of the surgery. (S45) I spend much more time explaining the process of the surgery and how we manage the post-operative period so that the patient and carers are completely aware of what to expect. (S67) Many participants considered the presence of a support person (eg, family member) during the consent process as critical. Some would even refuse to book surgery until a support person had been involved. The importance of establishing a trusting doctor–patient relationship with these patients was identified. I usually know these patients well before arranging surgery and have a good doctor-patient relationship with them, which renders preoperative consultation and operative planning easier (S64) Implementation of additional risk reduction strategies Many participants agreed that they implement additional strategies to try and reduce risk when performing only eye surgery. Preoperatively, steps to minimise risk included mental preparation, putting only eye patients at an optimal position on the surgical list, and ensuring the availability of good instruments and a good surgical team, including experienced anaesthetic and scrub-nurse colleagues. Postoperatively, risk mitigating strategies included safely managing low vision in the postoperative period, close follow-up, adequate patient education and easy access channels for them to seek assistance if any concerns. Preoperative Some participants reported employing mental techniques to prepare for only eye surgery. An example provided was task visualisation, a technique in which the surgeon uses their imagination to mentally rehearse the procedure beforehand. Choice of patient position on the surgical list was deemed important by several participants. Exactly which spot was considered the best did vary, however, there seemed to be strong agreement that first on the list should be avoided to allow ‘warm up’. One participant suggested leaving the most surgically complicated cases until last to avoid any time constraints but did not limit this to only eye cases. Several participants reported they would personally ensure every piece of equipment that might be required for the surgery was available for their only eye cases. I make sure that I have everything in theatre and nurses don't have to go looking for things, just in case I need something. (S39) Many participants considered it important to have a well-trained and experienced surgical team on the day. A consistent and familiar team helped some of the participants feel more comfortable about the surgery. I think in these ones you really need to be 100 percent sure that your team’s good … I will only do only eyes at the place (hospital) where I get most of my regular eye team. (S63) It was the experience of some participants that clearly communicating a patient’s only eye status with the theatre staff in advance may help ensure the surgeon had the best personnel assisting on the day. 
Postoperative Participants described strategies to maximise patient safety in the immediate postoperative period, where there may be an increased risk of injury or falls in the context of reduced vision. Many participants preferred topical or general anaesthetic for cataract surgery in only eye patients, so that the patient could see immediately after surgery. If a local anaesthetic block was used, short-acting anaesthetic agents were preferred over long-acting agents to avoid an extended period of reduced vision. Covering the patient’s eye at completion of the surgery, in a manner that would obscure their vision, was avoided if possible. Only eye patients would be kept in the recovery bay for a longer period of observation until they could demonstrate an ability to navigate safely. They would only be discharged in the presence of a support person, or if necessary, admitted to hospital overnight. Some participants recommended closer postoperative follow-up for all only eye patients, such that if there were any complications, they would be alerted to them at an earlier stage. Others were comfortable with routine follow-up, provided the surgery had gone well. It was noted that only eye patients are often very keen to be reviewed more frequently. Participants ensured their only eye patients were adequately educated regarding what to expect postoperatively and had a direct access channel to seek assistance if they encountered any problems. Many would provide their personal phone number or the details of the local eye hospital emergency department. Value of having colleagues to discuss and plan surgery with Many participants agreed that it was beneficial to have mentors or experienced colleagues that they could talk to about difficult only eye cases. This could be during the decision-making process on whether to operate on an only eye. Find it particularly useful when deciding re filtering surgery in progressive glaucoma where pressures are normal. (S3) Or it could be for advice prior to performing a less familiar procedure. Sometimes you're forced to do an unusual procedure in these extreme cases and well, you know, that’s when you consult as widely as you can with your colleagues. (S54) The mere act of involving another colleague also provided a degree of psychological reassurance for some participants. A problem shared is a problem halved (S3) Psychological challenges Participants acknowledged that as the surgeon they carried ultimate responsibility for their patients, and they felt an additional weight of responsibility for their only eye patients. There is always a feeling of greater responsibility in operating on these patients. (S165) I often feel me as the surgeon carries the burden. (S1) Some participants revealed that they experience stress in the lead up to difficult only eye surgery. I don’t sleep well before doing a complex only eye e.g, small pupil, shallow anterior chamber. (S33) Other participants acknowledged the stress caused by only eye surgery but did not necessarily perceive it in a negative way. When things start to go pear-shaped, it’s how you interpret your physiological responses. If you interpret the tachycardia as fear, you are in trouble. But if you interpret it as your body preparing you to meet the challenge you will be well placed to deal with it. (S44) Some participants found that meditation techniques, such as mindfulness, helped them manage their stress. 
Participants were acutely aware of how an adverse surgical outcome could have catastrophic consequences for an only eye patient. Losing an eye from surgery is devastating, but to realise you have blinded somebody that is, made them dependent for the rest of their lives is a very heavy blow to your psyche. (S1) Participants described how a negative past experience, such as an adverse surgical outcome in an only eye patient that had occurred under their care, could still affect them psychologically. All surgery, only eye or not, should be treated the same A divergent theme emerged within the data that appeared to disagree with the sentiment that only eye surgery should be approached differently. Some participants argued that all eyes were equally important, and that any approach that potentially led to safer outcomes for only eye patients should be applied to all patients. Should we not be treating every eye surgery as if it is the only eye always? (S39) Each eye in every person is important and deserves full attention, whether or not it’s the only one remaining for that particular individual. (S146) Some participants acknowledged the additional psychological burdens that could arise when performing only eye surgery but believed that devoting too much attention to this was more of a hindrance than help. These participants implicitly agreed that only eye surgery was different, but despite this, surgeons should try and keep their approach the same. Maybe we are making this more stressful than it should be … I agree that I would never sleep if I thought of the consequences to my patient’s life if it went wrong. You do the best for all your patients each time you operate without putting extra stress on yourself and staff? (S12)
The decision to operate on an only eye should be individualised for each patient. The decision is shared and involves collaboration between both patient and surgeon. In our study, surgeons regarded being able to explore and address individual patient concerns as an important step in the only eye surgery decision-making process. Patients are understandably more anxious when it comes to surgery on their only eye. Individual variation in patients’ level of anxiety may exist due to differences in personality and risk tolerance, but past experiences may also play a role. A patient who lost one eye due to a surgical complication would probably be much more fearful of undergoing surgery in their good eye than a patient who had long-standing poor vision secondary to amblyopia. A patient who has already undergone successful vision improving surgery to their only eye may be more anxious about subsequent surgery that will place this ‘better vision’ at risk. Some patients may require multidisciplinary input including psychological support to assist them through the surgical process. This study highlighted the use of second opinions as a valuable tool to assist with the only eye surgery decision-making process. Patients may seek a second opinion to acquire more information and help them make a sound treatment decision. Doctors may seek a second opinion when a difficult clinical decision needs to be made. Some ophthalmic surgeons in our study reported engaging the advice of a colleague when faced with a difficult decision about whether to offer surgery to an only eye patient, such as whether to perform filtration surgery for progressive glaucoma with normal intraocular pressures. A formalised version of this process exists for cardiothoracic surgeons in the UK, where high-risk patients are referred to a Surgical Council or ‘Star Chamber’ and a group of surgeons collectively decide on the best course of management. There was evidence of a differential threshold at which surgery is performed on an only eye compared with a binocular patient’s eye. Many surgeons in our study adopt a higher threshold to offer surgery to their only eye patients. This can be considered under the broader concept of material risk, which is defined as risk that a reasonable person in the patient’s position would be likely to attach significance to, as opposed to a ‘one-size-fits all’ approach. Using the test of materiality, greater risk is attached to performing eye surgery in an only eye patient compared with performing the same surgery in a binocular patient.
As evidenced by major court rulings in the USA, Australia and more recently the UK, there has been a shift in thinking over the past few decades such that doctors now have a legal duty to disclose material risk during the consent process. This means that in order to adequately consent only eye patients for surgery, they must not only be made aware of the surgical risks, but also fully appreciate the impact that these may have in the context of their only eye status. This necessitates that extra time is spent on the consent process for these patients. Our study identified that the presence of a support person such as a family member or close friend of the patient during the only eye surgery consent process was considered critical by many surgeons. Patients worldwide look to family and community for help with important decisions. In the surgical context, a support person could help provide emotional support, input into the treatment decision and assistance with information recall later. Surgeons in our study found that engaging surgical peers or mentors was useful when managing only eye patients. The importance of mentorship as part of surgical training is well established. After completing surgical training, mentoring relationships remain important throughout one’s career. Even among fully qualified surgeons, there exists individual variation in expertise and experience. Our study found that surgeons may consult widely with their colleagues prior to performing a complex or unusual procedure that is required in a challenging only eye case. With the increasing digitisation of ophthalmic clinical assessment tools and the rapid uptake of telehealth during the current worldwide pandemic, access to colleague support should be better than ever before. Online services that provide patients with access to second opinions already exist. The cultivation of both informal and formal peer networks where surgeons can seek advice on the management of only eye patients should be encouraged. A unique aspect of operating on a patient’s only eye is safely managing their low vision in the post-operative period. Poor vision correlates with increased risk of falls and injury. Surgeons in our study would adjust their approach during only eye cases to maximise the patient’s vision in the immediate postoperative period. Such adjustments included selecting general anaesthesia, topical anaesthesia or a short-acting block as the preferred mode of anaesthesia and minimising or avoiding patching the eye at the completion of surgery. A longer period of observation is generally recommended to ensure the patient is safe for discharge post operatively. Alternatively, admission to hospital may be required if they do not have a safe supervised home environment to discharge to. An adverse outcome in only eye surgery can have catastrophic consequences for the patient. It may result in total blindness and loss of independence, completely transforming their way of life. It is known that serious complications can have negative emotional impacts on the surgeon too, the so-called ‘second victim’ phenomenon. This may lead to feelings of guilt, burnout or depression. Surgeons in our study were acutely aware of the devastating consequences of a poor outcome in only eye surgery, and how it could affect their own psyche. They reported how the negative thoughts associated with such an event could persist long term. Jones et al found similar experiences among surgeons who had lost an only eye. 
Surgeons in their study felt that there was a lack of formal support available for individuals going through this. Available literature suggests it is important that the second victim receives emotional support. This may be accomplished through extensive and open discussion with peers, family and counsellors. More research needs to be done into the emotional effect that complications can have on surgeons and the optimal way of managing and preventing these. There was a subset of surgeons in our study who felt that all eye surgery should be treated the same. It is interesting to note that there is inherent logical inconsistency in this approach. All surgeons stratify patients (with one or two eyes) for complexity and risk. Only eye surgery is a subset of high-stakes surgery, and one would not advise training a new scrub-nurse or junior anaesthetist on these cases. Therefore, one can assume that surgeons do not treat all patients in the same way; variation in approach is key in more complex work. We propose integrating the key elements of the only eye surgery process into a conceptual framework . Along the vertical axis are the three separate entities involved in the process: the patient, surgeon and operating theatre. As this study focuses on the surgeon experience of only eye surgery, the surgeon finds themselves in the centre of this axis. The horizontal axis is a timeline extending from pre-operative to postoperative phases. Within the figure, each of the five elements is positioned relative to its temporal position in the surgical process along the horizontal axis and relative to the directly involved entity along the vertical axis. The surgical decision is generally followed by the consent process. Note that there may not necessarily be such a clear distinction between the two in real life. The actual signing of the consent form is the final step in the consent process and should be considered a small although essential part. Psychological challenges for the patient and surgeon are present throughout the entire surgical process. Colleague support is a prominent factor early on but dwindles in the intraoperative and postoperative phases. While the surgeon must be cognisant of risk minimisation through the entire process, the operating theatre’s involvement is limited to the perioperative period. This framework allows us to consider the process of only eye surgery from beginning to end and highlights the relevant considerations at each point along the way. We would like to address the strengths and weaknesses of this study. The demographic breakdown of participants suggests it captures the views of a senior group of ophthalmology consultants within the Australian context. We believe that this is an advantage, as the data is drawn from cumulative decades of only eye surgery experience. Most surgeons in the study were male; however, this approximately reflects the current sex distribution of ophthalmology fellows in Australia and New Zealand. Although these professional development webinars have covered various topics in the past and are generally attended by a diverse rather than subspecialised group of ophthalmologists, it is important to note that only eye surgery was the key topic of the 2019 webinar from which surgeons were recruited for this study, and thus the possibility of selection bias must be considered. The focus groups in this study were quite large, containing up to 26 surgeons. This has the benefit of capturing a great breadth of experiences. 
Conversely, large focus groups may limit each surgeon’s opportunity to share their individual insights, particularly those who may be less confident. This was offset in our study through the availability of a discussion text box and anonymous comment section, which have been shown to raise participation rates and create equalisation among study participants. Examining whether there are differences in the approach to only eye surgery within the various ophthalmic surgical subspecialties would be an interesting avenue for further research. Our data supported many of the findings of the previous study into only eye surgery by Jones et al . The focus group design and larger number of participants in our study allowed for the dynamic exchange of viewpoints among participants and gave extra breadth to the results, although this approach did limit the depth of analysis achievable compared with face-to-face interviews. Aside from proposing a new conceptual model, there are several important findings that our study adds to the current literature. These include practical tips on reducing risk and managing low vision in the perioperative period, the importance of having a support person involved in the consent process and the existence of a differential threshold at which surgeons may be willing to perform surgery on a monocular patient. We would like to highlight, however, that the decision to operate on an only eye is complex and individualised, involving close collaboration between both patient and surgeon. This study provides a broad insight into how only eye surgery is perceived by ophthalmic surgeons. We have identified unique and important aspects of the only eye surgery process that the surgeon should be aware of. We have proposed a conceptual framework to help guide surgeons, which has the potential to promote a more unified approach to treating this high-stakes cohort. The findings of this study are relevant to surgeons, trainees and surgical educators involved in the care of the only eye patient population. Promoting awareness and skills development in the key areas identified in this study may lead to better patient experience and outcomes.
Machine learning for emerging infectious disease field responses
6c6c227c-4a97-449c-b843-e6141214fc75
8748708
Preventive Medicine[mh]
Emerging infectious diseases (EIDs), including the severe acute respiratory syndrome (SARS) (2003) , H1N1 influenza virus (2009) , Middle East respiratory syndrome coronavirus (MERS-CoV) (2012) , and coronavirus disease 2019 (COVID-19) pandemic , have emerged and caused global public health crises in recent decades. Without existing protective immunity at both individual and population levels, an emerging infectious disease may spread efficiently and lead to large numbers of severe cases and deaths in the community . In particular, with a highly contagious novel respiratory infectious disease , medical resources, including medications, personal protective and life-supporting equipment, may be quickly exhausted once hospitals are overwhelmed with infected patients , . This may inevitably cause excess mortality, as demonstrated in many countries during the 2020–2021 COVID-19 pandemic , . As the clinical spectrum of emerging respiratory infections may range from asymptomatic or mild respiratory symptoms to severe pneumonia or acute respiratory distress syndrome , , it is therefore imperative for first-line physicians to prioritize scarce medical resources for critically ill patients and early symptomatic patients with high risk of rapid progression and death , . However, in the early stage of the outbreak of a novel respiratory infectious disease, there is usually neither prior knowledge nor available guidelines for the physicians to optimize medical decisions. Accordingly, it is of interest to investigate how to exploit machine learning (ML) technologies to cope with this challenge. In recent years, ML technologies have been widely exploited in medical and public health research – . ML algorithms are highly effective in analyzing interactions among multiple, complex variables in clinical databases and making accurate predictions, while it may take a medical practitioner months or even years to accumulate sufficient experience to develop a decision-making process. However, there is a wide range of ML algorithms with very different characteristics and design goals. At one end of the spectrum, advanced ML algorithms such as the deep neural network (DNN) , and the support vector machine (SVM) employ complicated non-linear transformations to achieve superior prediction accuracy. However, due to the complicated non-linear transformations involved, it is essentially impossible to figure out how these kinds of ML algorithms make predictions. At the other end of the spectrum, ML algorithms such as decision trees (DT) – and the naïve Bayesian classifier follow highly interpretable decision processes to make predictions but may suffer inferior prediction accuracy due to the lack of non-linear transformations involved in the prediction process. The trade-off between prediction accuracy and interpretability with alternative ML algorithms may be an everlasting dilemma depending on different clinical applications. As pointed out by Flaxman and Vos, for some applications, using an explainable approach is more understandable and favourable for physicians even when it results in a slight reduction in accuracy . As ML technologies have been widely exploited in medical and public health research, it is not surprising to observe that scientists have been developing ML-based prediction models to address the challenges faced in the recent COVID-19 pandemic – . Several prediction models have been proposed to identify those COVID-19-infected patients with a high risk of progression to severe diseases – or even death – . 
These studies extracted hospital COVID-19 cohorts, which included clinical presentations, laboratory data, and even images, to predict the risk of severe diseases and fatality. In this study, we have aimed to address the challenges brought by an EID disaster from the aspect of preventive medicine. Accordingly, we have incorporated only age, sex, and comorbidities as features to build the ML based prediction models for identifying the population at risk of severe diseases before infection. The proposed ML models are of significant merit when health policymakers need to identify high risk populations and then develop a prioritized vaccination strategy accordingly. For this scenario, we have developed prediction models that can provide health policymakers with explicit decision rules. These decision rules can also be exploited to educate the people with high risk to seek medical treatments promptly once they develop symptoms. In the recent COVID-19 pandemic, almost all countries with community outbreaks experienced unprecedented mortality due to the collapse of their healthcare systems. In such a scenario, the frontline physicians could incorporate our proposed prediction models to triage patients without laboratory tests, which could become scarce during a pandemic, in order to discharge patients with minimal risk. In this study, we have developed three types of prediction models, namely, the DT models , , the state-of-the-art DNN models , , as well as the conventional logistic regression-based prediction models. We have further conducted comprehensive analyses on the performance delivered by different types of prediction models. Data collection and outcome measurement We conducted this study based on the reimbursement data of one million randomly sampled subjects extracted from the de-identified National Health Insurance Research Database (NHIRD) in Taiwan. Figure shows the process to generate the cohort. We began with 92,376 hospitalized ILI cases during January 2005 to December 2010. Supplementary Table lists the ICD-9-CM (International Classification of Diseases, 9th Revision, Clinical Modification) codes employed to define an ILI case, which were identified through syndromic surveillance and intensive discussions among Taiwanese physicians , . The information retrieved from ILI patients’ records included age, gender, and 19 comorbidities/conditions [heart disease, peripheral vascular disease, hypertension, cerebrovascular accident (CVA), neurological disease, pulmonary disease, allergic rhinitis, autoimmune disease, liver disease, diabetes, hyperthyroidism, hypothyroidism, renal disease, metastatic cancer, cancer without metastasis, leukaemia/lymphoma, acquired immunodeficiency disease, tuberculosis, mental illness, and pregnancy/postpartum women]. These comorbidities were identified based on a literature review and thorough consensus reached by physicians of infection, emergency medicine, occupational health and infectious disease epidemiologists . The corresponding ICD-9-CM codes employed to identify the 19 comorbidities are shown in Supplementary Table , which were defined based on the Charlson , , Deyo and Elixhauser measurements plus information from the Taiwanese Catastrophic Illness Card. Presence of a comorbidity was defined based on whether the patient was coded with the corresponding ICD-9-CM codes within 12 months prior to the index date of the ILI-related hospitalisation. 
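To make the cohort-construction step concrete, the sketch below shows how comorbidity flags of this kind can be derived from claims records. It is an illustrative Python/pandas sketch only: the column names (patient_id, icd9, claim_date, index_date) and the abbreviated ICD-9-CM prefix lists are assumptions for the example, not the NHIRD schema or the study's full code definitions, which are given in the Supplementary Tables.

```python
import pandas as pd

# Hypothetical, abbreviated ICD-9-CM prefixes; the study's full definitions are
# in its Supplementary Tables and are not reproduced here.
COMORBIDITY_CODES = {
    "diabetes": ("250",),
    "heart_disease": ("410", "428"),
    "cva": ("430", "431", "434", "436"),
}

def comorbidity_flags(claims: pd.DataFrame, index_dates: pd.DataFrame) -> pd.DataFrame:
    """Flag a comorbidity when a matching code appears within the 12 months before
    the index ILI hospitalisation, as described in the text.
    `claims` holds patient_id, icd9, claim_date; `index_dates` holds patient_id,
    index_date (both date columns already parsed as datetimes)."""
    merged = claims.merge(index_dates, on="patient_id")
    in_window = (
        (merged["claim_date"] < merged["index_date"])
        & (merged["claim_date"] >= merged["index_date"] - pd.DateOffset(months=12))
    )
    recent = merged.loc[in_window]
    flags = pd.DataFrame({"patient_id": index_dates["patient_id"].unique()})
    for name, prefixes in COMORBIDITY_CODES.items():
        hits = recent.loc[
            recent["icd9"].astype(str).apply(lambda code: code.startswith(prefixes)),
            "patient_id",
        ]
        flags[name] = flags["patient_id"].isin(hits).astype(int)
    return flags
```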
With the initial 92,376 hospitalized ILI cases, we excluded 250 cases with an unrecognized identity, who left hospital against medical advice, or who committed suicide, and an additional 2687 cases with incomplete records. Then, we merged two consecutive records of the same patient if these two consecutive records were within 14 days. In the end, a cohort containing 83,227 cases was created (Fig. ) and the demographic analysis of the cohort is presented in Supplementary Table . The outcome of concern was severe ILI, which was defined as the occurrence of fatality or requiring critical care such as intubation, ventilator support, extracorporeal membrane oxygenation treatment, or admission to an intensive care unit during the hospitalization period. The study was approved by the Research Ethics Committee of the National Taiwan University Hospital (ID: 201603086RINB, April 14, 2016), and was performed in accordance with the Declaration of Helsinki. Experimental procedures Figure shows the experimental procedure employed in this study to analyze the performance delivered by different types of prediction models. The analysis began with a 2-stage feature selection process. In the first stage, we employed the conventional logistic regression (LR) analysis to eliminate those features that were uncorrelated to the outcome variable. Then, in the second stage, two advanced multivariate analysis methods, namely the least absolute shrinkage and selection operator (LASSO) method and the ensemble variant of minimum redundancy maximum relevance (mRMRe) method , , were employed along with the proposed DT-based method to determine the minimal subsets of the features without compromising prediction performance. With the three feature sets output by the LASSO, the mRMRe, and the proposed DT-based method, we proceeded to build the DT – , the LR , and the deep neural network (DNN) prediction models. Finally, the performance of these different prediction models was evaluated using 10-fold cross validation . Feature selection The feature selection process began with the 21-variable feature set shown in Table . Table also shows the results of the first-stage LR-based analysis. Since the p-values with mental illness, hypothyroidism, and hyperthyroidism were higher than 0.05, these three comorbidities were excluded. With the remaining 18 variables, we proceeded to carry out the DT-based multivariate analysis proposed in this study. In this procedure, the DT package shown in Supplementary Table was employed, and the parameters prior, which specifies the prior probability of positive cases, and cp, which controls the complexity of the output tree, were set to different values in order to generate models with various sensitivity levels. Supplementary Table shows the DT models that delivered sensitivity at the 85%, 90%, and 95% levels, respectively. Then, we selected the 6 variables that were consistently present in all of these DT models. To evaluate the effectiveness of the proposed DT-based multivariate analysis, we further incorporated the LASSO and the mRMRe , methods to extract another two 6-variable feature sets from the 18-variable feature set output by the first-stage feature selection process. Then, we proceeded to build the DT models, the LR models, and the DNN models based on these three 6-variable feature sets for performance evaluation. 
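As an illustration of the two-stage idea (a significance screen followed by a sparser multivariate selection), a minimal Python sketch is given below. It uses statsmodels and an L1-penalised logistic model as stand-ins; the authors' actual packages, the DT-based selection and the mRMRe step are not reproduced here, and the p-value cut-off and penalty strength are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

def two_stage_selection(X: pd.DataFrame, y: pd.Series, p_cut: float = 0.05, k: int = 6):
    # Stage 1: fit a logistic regression and drop features whose coefficients are
    # not significant at p_cut (mirroring the exclusion of mental illness,
    # hypothyroidism and hyperthyroidism described in the text).
    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
    keep = [c for c in X.columns if fit.pvalues[c] < p_cut]

    # Stage 2: an L1-penalised (LASSO-like) logistic model ranks the surviving
    # features; retain the k features with the largest absolute coefficients.
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    lasso.fit(X[keep], y)
    ranked = sorted(zip(keep, np.abs(lasso.coef_[0])), key=lambda t: -t[1])
    return [name for name, _ in ranked[:k]]
```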
The development of prediction models In this study, we followed the same rationale presented in our previous work to develop two types of machine learning-based prediction models, namely the DT – and DNN models . The performance of the DT models is of interest due to the explicit decision rules produced by the DT algorithm, which is a unique feature favored by clinicians. However, the algorithm for building a DT model is based on univariate analysis and does not incorporate any linear or non-linear transformation. As a result, the prediction performance of the DT models may not match the advanced prediction models when applied to those datasets in which different classes of samples are separated by non-linear boundaries. In this respect, with the advantage of non-linear transformations, the state-of-the-art DNN models generally can deliver superior prediction performance in comparison with other types of prediction models . However, a DNN-based model typically contains a large quantity of coefficients and therefore it is almost impossible for clinicians to figure out the logic embedded in the prediction process. In this study, we further investigated how the conventional LR models , performed because logistic regression is widely used in medical and epidemiological research. Supplementary Table summarizes the software packages and parameter settings employed to build the DT models and the main characteristics of the DNN models. With respect to the structure of the DNN models, we actually investigated the performance of more complicated networks and observed that the simple network structure shown in Supplementary Table delivered the same level of performance in comparison with more complicated network structures. In this respect, we experimented with network dimensions of 8, 16, 24 and 32 and set the number of layers to 3 and 4. Model performance evaluation To evaluate model performance, we employed 10-fold cross validation . As shown in Supplementary Table , in order to generate the DT models with alternative performance characteristics, e.g. different levels of sensitivity, we set the prior and cp parameters to various values. For generating the LR models and the DNN models with alternative performance characteristics, we varied the cutoff values at the outputs in order to discretize the numerical outputs into binary states. Model performance was evaluated based on several metrics, including accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), as well as three additional metrics designed to report the overall performance of the prediction models, namely, the F1 score , the Matthews correlation coefficient (MCC) , and the area under the receiver operating characteristic curve (AUC) (Supplementary Table ). In the subsequent discussions regarding the performance delivered by various prediction models, we will focus on the F1 score, which is defined to be the harmonic mean of the PPV and the sensitivity delivered by a prediction model and is a widely used performance metric in the machine learning research community. In recent years, scientists in the biomedical research communities have also started to incorporate the F1 score to report their performance data . 
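A short sketch of this evaluation step is given below: model scores are discretised at a chosen cutoff and the study's metrics are reported. This is an illustrative scikit-learn version, not the authors' code; sweeping the cutoff is what produces models at different sensitivity levels.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, matthews_corrcoef, roc_auc_score

def evaluate(y_true, y_score, cutoff=0.5):
    """Discretise continuous scores at `cutoff` and report the metrics used in the study."""
    y_pred = (np.asarray(y_score) >= cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "F1": f1_score(y_true, y_pred),         # harmonic mean of PPV and sensitivity
        "MCC": matthews_corrcoef(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_score),  # independent of the cutoff
    }
```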
To conduct a comprehensive performance analysis, we built different types of prediction models with alternative feature sets. Table summarizes the F1 scores delivered by these prediction models, and the comprehensive performance data is shown in Supplementary Tables a–c. The alternative feature sets incorporated to build the prediction models included the three 6-variable feature sets identified by the proposed DT-based analysis, the mRMRe, and the LASSO, along with the 18-variable feature set identified by the logistic regression-based analysis in the first stage of the feature selection process. With respect to the performance data shown in Table and Supplementary Tables a–c, the first observation is that the DNN model built with 18 variables performed marginally better than the other prediction models shown in Table . For example, under the column of 85% sensitivity, the F1 score of 0.452 delivered by the DNN model built with 18 variables is marginally higher than the other F1 scores delivered by the three DNN models built with the three different 6-variable feature sets, which were 0.447, 0.438, and 0.437, respectively. This observation implies that no significant information was lost when we employed only 6 variables. The second observation is that all these different types of prediction models built with alternative 6-variable feature sets basically delivered the same level of performance. For example, under the column with 85% sensitivity, the F1 scores delivered by different prediction models built with different 6-variable feature sets are all within the range from 0.433 to 0.447. Accordingly, in the following discussion, we will focus on the DT models built with 6 variables because the explicit prediction logic output by the DT algorithm was highly valuable with respect to clinical applications. The third observation is that the DT models built with the 6-variable feature set identified by the proposed DT-based method performed marginally better than the DT models built with the 6-variable feature sets identified by the mRMRe and the LASSO. For example, under the column with 85% sensitivity, the F1 scores delivered by the DT models built with the 6-variable feature sets identified by the proposed DT-based method, the mRMRe, and the LASSO are 0.446, 0.438, and 0.437, respectively. 
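The comparison reported above can be reproduced in outline with stratified 10-fold cross-validation over the three model families, scoring by F1. The sketch below is illustrative only: the hyperparameters are assumptions and do not correspond to the packages and settings listed in the authors' Supplementary Table.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def compare_models(X_six, y, random_state=0):
    """Mean F1 under stratified 10-fold CV for a DT, LR and a small DNN-like MLP."""
    models = {
        "DT": DecisionTreeClassifier(max_depth=4, class_weight="balanced"),
        "LR": LogisticRegression(max_iter=1000),
        "DNN": MLPClassifier(hidden_layer_sizes=(16, 16, 16), max_iter=500),
    }
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=random_state)
    return {name: cross_val_score(m, X_six, y, cv=cv, scoring="f1").mean()
            for name, m in models.items()}
```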
While the discussions above focus on the F1 scores, Supplementary Fig. S shows the receiver operating curves of alternative prediction models. Though we can observe marginal differences among the areas under the curve (AUCs) delivered by alternative prediction models, all receiver operating curves essentially overlap in the region above sensitivity 85%. As decision-makers like to know how to allocate resources most appropriately under different scenarios, Fig. a–c shows the DT models that delivered 95%, 90%, and 85% sensitivities, respectively. Since age was placed at the top level of the tree structures in all three models, it implied that age was the most crucial factor. The DT model with 95% sensitivity revealed that patients aged over 37.79 or under 0.54 years suffered high risk for severe ILI. Furthermore, the following two groups of patients also suffered high risk for severe ILI: (1) patients aged between 14.21 and 37.79 with heart disease, CVA, diabetes, metastatic cancer; and (2) male patients aged between 34.46 and 37.79 (Fig. a). The DT model with 90% sensitivity revealed that those patients older than 66.04 years old suffered the highest risk of progression to severe illness. Furthermore, those female patients aged between 41.46 and 66.04 and with CVA, diabetes, heart disease, and metastatic cancer also suffered high risk for severe ILI (Fig. b). The DT model with 85% sensitivity identified the following three groups of patients that suffered high risk of severe ILI: (1) patients older than 66.04; (2) male patients aged between 41.46 and 66.04 with heart disease, metastatic cancer, CVA, and diabetes; and (3) female patients aged between 41.46 and 66.04 and with CVA (Fig. c). Overall, 31.0% (25,780/83,227), 41.7% (34,681/83,227) and 48.3% (40,187/83,227) of those hospitalized ILI patients were predicted to have low risk of progression to severe ILI by the three DT models with 95%, 90% and 85% sensitivity, respectively (Fig. ). Table shows the relative risks and NPV delivered by the DT models with different levels of sensitivity. The relative risk compares the risk of progression to severe illness between the group of patients predicted by the DT model to be positive and the group of patients predicted to be negative. In field applications, the relative risk provides the public health administrators and the physicians with an instinctive understanding of how successfully the prediction model partitions the high-risk patients and the low-risk patients. As shown in Table , the relative risks delivered by the DT models with 95% sensitivity, 90% sensitivity, and 85% sensitivity were 10.15, 6.93, and 5.50, respectively; these values imply that the group of patients predicted by the DT models to be positive did in fact have significantly higher risk than the group of patients predicted to be negative. Table also shows that the NPVs of the DT models with different levels of sensitivity are all over 95%. The high NPVs imply that only very small percentages of the patients predicted to be negative were false negatives. Finally, as our cohort is imbalanced, containing 14,995 positive cases and 68,232 negative cases, we employed the random over-sampling examples (ROSE) package in R to address this issue. Supplementary Tables a–c show the results with the ROSE package incorporated. One obvious observation is that no significant difference exists between the data shown in Supplementary Tables a–c and those shown in Supplementary Tables a–c. 
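For illustration, the high-risk groups named for the 85%-sensitivity model can be written as an explicit rule, which is exactly the kind of transparent logic the authors highlight. The sketch below paraphrases the three groups from the text; whether the comorbidities in group (2) combine as 'any of' is an assumption here, since the exact branch structure is shown only in the paper's figure.

```python
def high_risk_85(age, male, heart_disease, metastatic_cancer, cva, diabetes):
    """High-risk groups described for the DT model with 85% sensitivity (illustrative)."""
    if age > 66.04:                                            # group (1)
        return True
    if 41.46 < age <= 66.04:
        if male and (heart_disease or metastatic_cancer or cva or diabetes):
            return True                                        # group (2), assuming 'any of'
        if not male and cva:
            return True                                        # group (3)
    return False
```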
We have conducted a comprehensive analysis on how to exploit machine learning algorithms to stratify the risk of severe illness or death among hospitalized ILI patients. There were three major findings in this study. Firstly, the three different types of prediction models investigated in this study, namely the DNN models, the LR models, and the proposed DT-based models, delivered comparable performance in predicting severe ILI after hospitalization. Secondly, the tree structures of the DT models explicitly illustrate how predictions are made and provide valuable guidelines for clinicians to develop effective strategies for risk stratification of ILI patients. Thirdly, the clinicians can employ the DT models with an appropriate sensitivity level to cope with the availability of medical resources and public health needs in different epidemic stages of an EID disaster. With respect to the performance of the different types of prediction models, namely the DT models, the LR models, and the DNN models, our results may be confusing for some machine learning experts who strongly believe that the DNN models should prevail in most cases , , . However, how the DNN model performs in comparison with different types of prediction models really depends on how different classes of subjects, e.g. positive vs. negative, are distributed in the dataset. If different classes of subjects can be partitioned by linear geometric objects defined by a very limited number of features, then different types of prediction models may deliver comparable performance. In other words, the DNN models may not prevail in this case, which was exactly what we observed in this study. In fact, we also observed a similar result from one of our recent studies on dengue . With the DT models being able to deliver performance comparable to the state-of-the-art DNN models, the explicit prediction rules presented in the DT structures provide valuable references for developing effective clinical strategies. All the studied DT models with different sensitivities identified advanced age as the most critical risk factor for severe ILI. This result is in conformity with clinical experience, as advanced age, along with comorbid medical conditions such as diabetes , , cirrhosis , malignant diseases , , etc., has been recognized as one of the crucial risk factors for severe ILI. Furthermore, the cutoffs employed by the DT models to partition age groups are in conformity with clinical insights. Nevertheless, these cutoffs along with the comorbidities identified in the DT structures provide clinicians with systematic clues regarding how to treat the patients most effectively when facing an EID. There are two scenarios in which the DT models developed in this study can be exploited. The first scenario is that a public health administrator may want to develop an effective vaccination policy. In this scenario, the decision rules output by the DT models can provide the health policymaker with a set of guidelines for prioritizing the groups of people with a high risk of disease progression to receive the vaccine. In this respect, as shown in Table , the relative risks delivered by the DT models with different levels of sensitivity were all over 5, which implies that the group of patients predicted to be positive suffered a significantly higher risk of progression than the group of patients predicted to be negative. 
Depending on the coverage of the high-risk population to be achieved, the public health administrator can decide which DT model should be employed. For example, when a vaccine has just been successfully developed, the quantity of the vaccine available may be limited. In this case, the public health administrators can adopt the decision rules provided by the DT model with a lower sensitivity, e.g. 85%. Once the production of the vaccine runs smoothly and there is an abundance of vaccine, the decision rules provided by the DT model with 95% sensitivity can be exploited to achieve herd immunity. In addition to the application described above, the decision rules output by the DT models can provide the general public with valuable health guidelines. These decision rules can remind those people with high risk to watch their health conditions closely and seek medical help once they suffer from mild symptoms. Another scenario in which the prediction models developed in this study could be incorporated is to optimize resource management at healthcare facilities once an EID disaster emerges. The DT models with different levels of sensitivity can be employed in different stages of an EID disaster (Fig. ). In the early stage of an EID disaster, when the healthcare capacities are not overloaded, the DT model with 95% sensitivity should be employed to identify patients with risk of disease progression so that they can be hospitalized and receive the best possible treatment , to minimize fatalities. As shown in Table , the DT model with 95% sensitivity could discharge 30.9% (25,780/83,227) of the admitted ILI patients from medical facilities, with only 0.8% (635/83,227) of patients mistakenly discharged. As the development of the EID disaster progresses, the tremendous increase in patient numbers and the surging demand for medical resources may rapidly exceed the capacities of medical facilities. In the recent COVID-19 pandemic, almost all countries with community outbreaks experienced unprecedented mortality due to the collapse of the healthcare systems. In this event, clinicians may be forced to triage patients without laboratory tests, which could become scarce during a pandemic, in order to discharge patients without potential risk for subsequent deterioration . Accordingly, the DT model with 85% sensitivity can be employed, which predicted that 48.3% (40,187/83,227) of the admitted ILI patients were without risk of progression and could be discharged to relieve the overload at medical facilities. The high NPV delivered by the DT model with 85% sensitivity, which was 94.6% as shown in Table , suggests that only a small percentage of patients would be mistakenly discharged. There are several limitations in the current study. Firstly, the diagnosis of ILI was based on ICD-9-CM codes without laboratory confirmation of influenza. Nevertheless, ILI-related clinical syndromes may be the best surrogate diagnostic category representative of patients with community-onset respiratory infections that may progress towards severe illness and death , . Secondly, our dataset, based on nationwide insurance reimbursement data (claims data), does not include laboratory data or other potential confounding factors that may influence the prognosis of respiratory infections, including obesity , , smoking , geographic distributions , and socioeconomic conditions , which were not available in the NHIRD database. 
However, our model based on demographic data and comorbidities is useful in preventive measures, such as public education and vaccination policy. Furthermore, physicians facing resource shortages during a pandemic have to rely on fewer laboratory test results to identify the population at risk. Thirdly, we did not investigate the performance of other advanced machine learning algorithms such as the support vector machine, random forests, Bayesian networks, etc. Nevertheless, it is generally observed that the DNN-based prediction models can deliver comparable or even superior performance when compared with other advanced machine learning algorithms. Fourthly, as our experimental data was extracted from a single national insurance reimbursement database, readers should be cautious in generalizing our findings before further validation studies are conducted. In conclusion, our results showed that the DT-based prediction models delivered performance comparable to the DNN models in predicting ILI severity. The explicit prediction logic shown in the DT structures may be exploited to facilitate the decision-making process executed by clinicians. Furthermore, the DT models with alternative sensitivity levels can be exploited in different stages of an EID disaster to optimize medical resource allocation, which is crucial in the response to a large-scale epidemic of emerging infectious disease.
Impact of early versus late
4f13a5dd-4938-4b31-bf31-4562a8ada9e9
11752794
Surgical Procedures, Operative[mh]
Background and rationale {6a} Induction of labour (IOL) is one of the most common procedures performed in pregnant people, with approximately 40% of labours being induced at Te Toka Tumai Auckland . There are multiple induction agents that may be used for IOL, including prostaglandins, catheters and oxytocin infusion . There appears to be regional variation in induction and labour management in relation to artificial rupture of membranes (ARM), with some centres performing ARM liberally, while other centres are quite restrictive in the practice. People undergoing induction of labour in New Zealand commonly have an ARM early during the process, often immediately prior to oxytocin infusion. This differs from other settings where ARM is performed after the patient enters active labour, or not at all (meaning that spontaneous rupture of membranes [SRM] is awaited). Two Cochrane reviews have investigated the role of ARM. One review assessed performance of ARM during spontaneous labour and one assessed ARM to induce labour . Neither identified a clear benefit to ARM and hence it was not recommended by either group of authors . One potential issue with performing an early ARM is that the protective barrier between the uterine cavity and the vagina (the amniotic membrane) is now interrupted, allowing for ascent of bacteria from the vagina into the uterus. Moreover, this practice permits the fluid ‘cushion’ surrounding the fetus to be released and theoretically provides a greater chance for umbilical cord compression, leading to fetal heart rate decelerations. These theories that lead to a concern about early amniotomy are supported by evidence from three small trials . The first trial, involving 209 participants in the United States, identified a marked difference in people undergoing amniotomy in rates of chorioamnionitis (22.6% versus 6.8%, p = 0.002 in the early ARM versus late ARM groups, respectively) and variable decelerations (19.6% versus 6.4%, p = 0.08 in the early ARM versus later ARM groups, respectively) . The second trial including 168 people recruited in Israel identified an elevated risk of intrapartum fever (which we consider to be a surrogate for chorioamnionitis) in the early ARM group (8.7% versus 2.3%, RR 1.69; 95% CI 1.15–2.5) in addition to a higher risk of caesarean birth in the early ARM group (25% versus 7.9%, RR 1.74; 95% CI 1.30–2.34) . Lastly, a trial from India in which 150 women were randomised indicated shorter labours (7.35 h versus 11.66 h, p = 0.000) but a significant increase in caesarean rate (2.7% versus 10.7%, p = 0.049) when ARM was performed early . A further two trials indicated that early and late amniotomy yielded equivalent outcomes between groups for rates of infection . The first of these trials was carried out in Canada and was unfortunately closed to recruitment at 143 participants after 3 years because of the low recruitment rate . The primary outcome was rate of caesarean birth, which did not differ for nulliparous (18 vs 17%, p = 0.91) or parous (0 vs 3%, p = 1.0) people in the early and late amniotomy groups, respectively. This trial also indicated a non-significant trend towards fewer fevers in the early amniotomy group (3 vs 25%, p = 0.05). The second of these trials was performed in the United States and included 585 participants, all of them nulliparous . 
The primary outcomes of the trial were time from induction to birth (19.0 vs 21.3 h, p = 0.04) and percentage of people who gave birth within 24 h (68 vs 56, p = 0.002) in the early and the late amniotomy groups, respectively. Chorioamnionitis rates were equivalent between groups (11.5 vs 8.5%, p = 0.22). However, in this trial, most people in the early ARM group had ARM at 3 cm cervical dilation, making the trial incomparable to the New Zealand context, where amniotomy is frequently performed as soon as feasible, which is at a cervical dilation that permits the insertion of an amniohook instrument (often 1–2 cm of dilation). The findings of these trials are of significant import in the approach to IOL in New Zealand and internationally. Chorioamnionitis is a common occurrence in induced labour and is often a factor in the decision to perform an emergency caesarean birth and then in subsequent surgical infective complications. Chorioamnionitis entails maternal risks and requires treatment for both the mother and the infant during and after birth. Caesareans change the risks for the mother and infant but generally both have a higher risk of complications when a caesarean is performed after the commencement of labour (as occurs in the setting of labour induction). Chorioamnionitis is a risk factor for cerebral palsy and neonatal encephalopathy, even in term foetuses . Trial data regarding timing of amniotomy and infective and operative risk is mixed, showing either a benefit to delaying the procedure or no significant difference in outcomes between groups. There is some indication that induced labour may be slightly shorter with earlier amniotomy. None of the trials has been carried out in New Zealand, and both trials indicating equivalent outcomes in the early and late amniotomy groups were methodologically quite dissimilar to usual labour management in obstetric units in New Zealand. Indeed, the New Zealand national guideline on induction of labour identified timing of amniotomy as a research gap . Irrespective, the procedure is performed on approximately 15,000 people who have IOL in New Zealand each year . Therefore, ascertaining the ideal timing of ARM is important. To answer the question of whether early amniotomy causes an increased rate of chorioamnionitis in the setting of IOL, we are performing a randomised controlled trial. If our study indicates benefit to delaying amniotomy until active labour, this is a low-cost intervention (changing the timing of something that is done anyway) which could provide great benefit for women and babies. Objectives {7} The objective is to assess the rate of chorioamnionitis in women undergoing early versus late ARM. Trial design {8} The ARM trial is a randomised controlled trial being performed at a single institution in New Zealand, Te Toka Tumai Auckland. Participants undergoing oxytocin IOL are randomised to either ‘Early ARM’ or ‘Late ARM’ in a one-to-one ratio, stratified by parity (parity = 0 or ≥ 1). The primary hypothesis is that people undergoing ‘Late ARM’ will have a lower chance of developing chorioamnionitis than those people undergoing ‘Early ARM’. 
Study setting {9} This is a single-centre RCT, being carried out at Te Toka Tumai Auckland. The hospital is located in the central Auckland urban catchment and provides both assessment and birthing units as well as a level 3 NICU. Te Toka Tumai Auckland provides care to approximately 6000 people giving birth each year. 
Eligibility criteria {10} Inclusion criteria: pregnant people with a live singleton fetus in cephalic presentation; planning IOL at ≥ 37 weeks gestation; intact membranes; normal cardiotocography; and requiring oxytocin for induction of labour. Exclusion criteria: previous caesarean birth; major fetal congenital anomaly or known chromosomal abnormality; and fetal growth restriction with absent or reversed end-diastolic flow noted on umbilical artery Doppler (fetal growth restriction with an abnormal pulsatility index of the middle cerebral artery or umbilical artery, or an abnormal cerebroplacental ratio, is permissible). **A prior exclusion criterion was participation in the OBLIGE study; that study is now completed. Who will take informed consent? {26a} Patients identified as eligible are approached for inclusion in the trial when they present to the Women’s Assessment Unit at the Auckland City Hospital. Lead maternity carers (LMCs), who are either midwives or obstetricians within the community, are aware of the trial and have frequently discussed it with the potential participant prior to their arrival to the unit. Once admitted for IOL, people are approached regarding the trial by either a research team representative or by an obstetric or midwifery team member. The trial procedures and purpose are reviewed with the patient, and a copy of the trial participant information sheet and consent form (PIS/CF) and pamphlet are provided for review. After an opportunity to discuss and ask questions, patients may choose to participate. Written informed consent is required for participation in the trial. Additional consent provisions for collection and use of participant data and biological specimens {26b} N/A. Data will not be utilised in ancillary studies, and biological specimens are not collected. 
Explanation for the choice of comparators {6b} The intervention groups in this trial are ‘Early’ and ‘Late’ timing of amniotomy. Standard care in our hospital is early amniotomy. The primary hypothesis is that people undergoing ‘Late ARM’ will have a lower chance of developing chorioamnionitis than those people undergoing ‘Early ARM’. Intervention description {11a} Amniotomy is a common obstetric procedure during which a gloved hand is used to perform a vaginal examination. Once the cervix is located and the examiner’s fingers are placed inside the dilated portion, against the amniotic membrane, a plastic amnihook is advanced along the fingers. The instrument has a sharp ‘hook’ at the end. This end of the implement is utilised to cause a small tear in the amniotic membrane, after which time some of the amniotic fluid is usually felt to be expelled vaginally. ‘Early ARM’ group: ARM is performed either prior to or within 60 min of commencement of oxytocin infusion ‘Late ARM’ group: oxytocin infusion is commenced first, and ARM is performed at ≥ 6 cm cervical dilation or if the participant has been receiving oxytocin infusion for at least 12 h and has not yet reached 6 cm of cervical dilation Criteria for discontinuing or modifying allocated interventions {11b} Participants may have modification to timing of amniotomy as the clinician caring for them sees fit. For example, a person allocated to having a ‘Late’ ARM may have this performed earlier than anticipated if a fetal scalp electrode needs to be placed for a clinical indication. Participants can withdraw from study procedures at any time. This may include withdrawal for clinical procedures (for example requesting an ‘Early ARM’ though they are randomised to ‘Late ARM’) but with consent for continued use of data. Participants may also withdraw the use of their data from the trial. As a contingency for participant withdrawal from both clinical procedures and use of data, recruitment of 500 participants is planned (power calculation for the trial requires 488 participants). Strategies to improve adherence to interventions {11c} Staff on the unit have had in-person education on the study groups. Additionally, written materials are available at all times on the delivery unit specifying the treatment per study arm. Relevant concomitant care permitted or prohibited during the trial {11d} Clinical care during the trial is per hospital clinical guidelines. Provisions for post-trial care {30} None outlined to participants. New Zealand has a public health care system. Treatment injuries (including injuries sustained during birth) are assessed and covered by the Accident Compensation Corporation (ACC) in New Zealand. Outcomes {12} Primary outcome: Secondary outcomes: Participant timeline {13} Person is identified as eligible. ↓ Discussion regarding study with clinician or member of the research team. (after discussion with clinician). Participant information sheet and consent form provided. ↓ Examination that assesses ARM feasible. (either at presentation or after cervical preparation for induction of labour). *People who go into labour from cervical preparation who no longer require oxytocin or with people who undergo SRM prior to randomisation are screen fails and are not randomised. ↓ Randomisation (online randomisation system ). ↓ IOL commences.
( Early ARM group : oxytocin infusion started and ARM within 60 min). ( Late ARM group : oxytocin started and ARM at 6 cm or more cervical dilation or at 12 h of oxytocin infusion). ↓ Participant completes post-induction survey regarding their birth experience. ↓ Data collection baseline demographics, labour and birth, maternal outcomes and neonatal outcomes to discharge from hospital. SPIRIT figure Sample size {14} A power calculation has been performed utilising the OpenEpi ( https://openepi.com ) program comparing the rates of chorioamnionitis between groups. The power calculation is based on the findings from the existing literature indicating a decrease in indicators of maternal infection when ‘Late ARM’ is performed . In order to show a decrease in chorioamnionitis from 9 to 3%, with power set at 80% and a 95% CI, 244 participants per arm are required, or 488 participants in total. A worked example of this calculation is sketched after the recruitment section below. The goal is to recruit 500 participants to this trial to account for participant withdrawals of data or loss to follow-up. The additional 12 participants are thought to be sufficient for the purposes of this trial as it would be highly unusual for a labouring patient to be ‘lost to follow-up’ for the primary outcome (which occurs in labour). We anticipate that the choice to withdraw data from analysis will be uncommon. Recruitment {15} All eligible people and their care providers can access information about the study on the trial website ARM.auckland.ac.nz. Trial information sheets and trial pamphlets are available to potential participants when they present for induction as well as through the hospital clinics. There are research employees available to recruit for the trial through the assessment unit at the Auckland City Hospital. Participants have an in-person discussion regarding the trial with a member of staff or a member of the study team. At this time, they are provided with a participant information sheet and consent form to read. They are given time to review the materials and then may elect to participate. Written informed consent is required for participation.
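For readers who wish to check the quoted numbers, the following is a minimal sketch of a two-proportion sample size calculation in Python using the standard normal-approximation formula; the inputs (9% vs 3% chorioamnionitis, two-sided alpha of 0.05, 80% power, 1:1 allocation) are taken from the text above, and the result can differ slightly from the OpenEpi output depending on the formula variant used (for example, whether a continuity correction is applied).

```python
# Minimal sketch (not the trial's actual OpenEpi output): two-proportion sample
# size using the standard normal-approximation formula.  Assumed inputs follow
# the protocol: chorioamnionitis 9% (early ARM) vs 3% (late ARM), two-sided
# alpha = 0.05, power = 80%, 1:1 allocation.
from math import ceil, sqrt
from scipy.stats import norm

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_a = norm.ppf(1 - alpha / 2)          # ~1.96 for alpha = 0.05
    z_b = norm.ppf(power)                  # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2                  # pooled proportion under the null
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

if __name__ == "__main__":
    n = n_per_arm(0.09, 0.03)
    print(f"{n} per arm, {2 * n} in total")   # close to the protocol's 244 per arm / 488 total
```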
Sequence generation {16a} Randomisation occurs via the Liggins Institute Clinical Data Research Hub . This service provides an electronic tool for participant screening and randomisation. The site contains a password-protected login, and the screening questions must indicate eligibility for the person to be randomised. Each patient identification number can only be randomised once within a 9-month period; hence, the randomisation cannot be performed more than once during the same pregnancy. An illustrative sketch of stratified block allocation is given at the end of this section. Concealment mechanism {16b} While there is allocation concealment until the point of the trial intervention being performed, blinding of participants and clinicians and data extractors is not feasible for this study. Data analysis will be blinded to study allocation. Implementation {16c} The randomisation is computer generated and outsourced to the Liggins Institute Clinical Data Research Hub. The research midwives and investigators who enrol participants do not have access to the randomisation schedule. People can consent to the study at admission to the hospital for induction of labour. At Te Toka Tumai Auckland, IOL patients are initially seen in the Women’s Assessment Unit and cervical preparation undertaken in that location. When they are found to have a favourable cervical examination, they await transfer to delivery unit. Once a participant arrives on delivery unit and is ready to commence induction with oxytocin, the midwife caring for the participant on delivery unit accesses the computerised randomisation site. The study intervention is then assigned electronically. Screen failures Participants in the study will frequently consent to inclusion either prior to or during cervical preparation for induction. Participants who have provided informed consent to be randomised who go on to labour with cervical preparation alone (thereby not requiring oxytocin infusion) or who undergo SRM (spontaneous rupture of membranes) prior to transfer to delivery unit are treated as ‘screen failures’ and are not randomised. While this may potentially result in a high rate of ‘screen failures’, the study was designed to have randomisation only once on the delivery unit thereby avoiding differential treatment of participants prior to oxytocin commencement based on study arm.
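The allocation itself is generated by the Liggins Institute Clinical Data Research Hub, whose internal algorithm is not described in this protocol. Purely as an illustration of the concept, the sketch below shows one common way an electronic service can produce a concealed 1:1 allocation stratified by parity using permuted blocks; it is not the Hub's implementation, and all names in it are hypothetical.

```python
# Illustrative only: one common approach (permuted blocks within strata) to
# 1:1 allocation stratified by parity (0 vs >= 1).  Not the Liggins Institute
# Clinical Data Research Hub's algorithm, which is not described in the protocol.
import random
from collections import defaultdict

BLOCK = ["Early ARM", "Early ARM", "Late ARM", "Late ARM"]  # block size 4, 1:1 ratio

class StratifiedRandomiser:
    def __init__(self, seed=None):
        self._rng = random.Random(seed)
        self._queues = defaultdict(list)     # one pending block per stratum

    def allocate(self, parity: int) -> str:
        stratum = "parity_0" if parity == 0 else "parity_1_plus"
        queue = self._queues[stratum]
        if not queue:                         # start a new shuffled block for this stratum
            queue.extend(self._rng.sample(BLOCK, k=len(BLOCK)))
        return queue.pop(0)

if __name__ == "__main__":
    randomiser = StratifiedRandomiser(seed=42)
    for parity in (0, 0, 2, 1, 0):
        print(parity, randomiser.allocate(parity))
```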
Who will be blinded {17a} It is not feasible to blind clinicians or participants to the study intervention. The statistician will be blinded to the study intervention during the analysis. Procedure for unblinding if needed {17b} N/A. This study is not blinded. Data collection and management Plans for assessment and collection of outcomes {18a} Baseline data collection is from chart extraction. Trial data collection is from chart extraction. Research employees performing chart extraction receive in-person training. Data collection is via electronic chart review. In relation to the primary outcome, vitals are reliably charted and the provision of antibiotics only occurs with a written record of the drug order. The secondary outcomes are all part of the standard data entry for all births (including for people not enrolled in the study) on our unit. Survey data is only completed either in person or via phone and is not available from the participant record. Data is entered into a secure dataset. Plans to promote participant retention and complete follow-up {18b} There is a 100% follow-up rate for participants for their immediate labour outcomes, as they remain on the unit for all study procedures. If any data is not able to be located within the chart, this data will be reported as missing for that individual outcome and not imputed. There is follow-up for a post-birth survey. This may be performed in person at a clinical encounter related to birth care (hospital or clinics) but is usually performed via phone. Attempts at contact are made three times via phone call for the post-birth survey.
Data management {19} Data is entered into a secure dataset with identifiers removed. The dataset is accessible only to research staff. A separate list of participants is kept, only available to research staff. Confidentiality {27} Data is stored in a manner that has identifiers removed. The dataset of participants is kept on a secure server which is only accessible by study staff. All outputs from this research will be presented in an aggregated manner in a way that individual participants would not reasonably be able to be identified. Plans for collection, laboratory evaluation and storage of biological specimens for genetic or molecular analysis in this trial/future use {33} N/A. There are no biological specimens being collected as a part of this study.
Statistical methods for primary and secondary outcomes {20a} Descriptive data will be presented on the study groups. Analyses will follow the principle of intention-to-treat. Missing data will not be imputed and will be presented as missing for these variables. Additionally, a per-protocol analysis will also be undertaken, which will exclude from analysis any participant in the ‘Early’ ARM group who had ARM performed greater than 60 min after commencement of oxytocin infusion and any participant from the ‘Late’ ARM group who had ARM performed before 6 cm dilation or 12 h from oxytocin commencement due to trial procedure withdrawal. Participants who have protocol deviations for clinical indications (for example to place a fetal scalp electrode due to fetal heart rate abnormalities) will remain in the analysis. Primary and secondary outcome analyses will be adjusted for the stratification variable using regression techniques, and outcomes presented as relative risks or mean differences with 95% confidence intervals; one common way of obtaining such adjusted relative risks is sketched at the end of this section. A p value of 0.05 will be considered statistically significant. There are multiple secondary outcomes. These will be reported with p values without adjustment for multiplicity but recognised as exploratory. Economic analysis The cost-effectiveness in this study will be determined by calculating length of stay in days for the pregnant person both antepartum and postpartum and the length of stay for the baby. Interim analyses {21b} No interim analyses are planned. The data safety monitoring committee have access to limited outcome data and to severe adverse event data. Methods for additional analyses (e.g. subgroup analyses) {20b} There are no additional analyses presently planned. We plan ongoing input from our Māori investigator and will re-discuss any additional analyses as we review the trial data. Any additional analyses will be added to the statistical analysis plan prior to data lock. Methods in analysis to handle protocol non-adherence and any statistical methods to handle missing data {20c} Data for individual outcomes will not be imputed. Missing data percentages will be reported for each outcome.
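As a concrete illustration of the adjusted analysis described in {20a}, the sketch below fits a modified Poisson regression (a log-link GLM with robust standard errors), which is one common way of obtaining relative risks adjusted for the stratification variable; the column names and simulated data are hypothetical, and the trial's final models may differ.

```python
# Hypothetical illustration of the {20a} analysis: relative risk of
# chorioamnionitis by trial arm, adjusted for the stratification variable
# (parity), via modified Poisson regression (log-link GLM with robust errors).
# Column names and data are invented for the example.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "chorioamnionitis": rng.integers(0, 2, 488),   # 0/1 outcome (simulated)
    "late_arm": np.repeat([0, 1], 244),            # 0 = early ARM, 1 = late ARM
    "nulliparous": rng.integers(0, 2, 488),        # stratification variable
})

model = smf.glm(
    "chorioamnionitis ~ late_arm + nulliparous",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC0")                              # robust (sandwich) standard errors

rr = np.exp(model.params["late_arm"])
ci = np.exp(model.conf_int().loc["late_arm"])
print(f"Adjusted RR (late vs early ARM): {rr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```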
Plans to give access to the full protocol, participant-level data and statistical code {31c} The full protocol is available on the Australian and New Zealand Clinical Trials Registry. Participants may opt-in to receiving a copy of study results when they consent to participate in this trial. The dataset will not be made publicly available. Composition of the co-ordinating centre and trial steering committee {5d} The trial steering committee is comprised of the investigators, who respond to any recommendations of the DSMC. In addition to these investigators, the trial group also relies on feedback from research midwives hired to perform study procedures. Composition of the data monitoring committee, its role and reporting structure {21a} The DSMC is comprised of a neonatologist (chair) and an obstetrician. The DSMC receives reports at each 25% of recruitment detailing recruitment, withdrawals, limited outcome data and severe adverse events. The DSMC makes recommendations to the trial steering committee regarding ongoing trial maintenance, any data requests and any additions to the factors reported. Statistical analyses for reporting to the DSMC are performed confidentially by a non-clinical co-investigator who is not involved in any aspect of trial recruitment, patient care, or data collection.
Adverse event reporting and harms {22} Adverse events (AEs) are collected for each participant. The primary outcome, chorioamnionitis, is considered an AE in labour, as are antepartum haemorrhage, postpartum haemorrhage (> 500 mL), umbilical cord prolapse, maternal birth injury, neonatal birth injury and neonatal infections requiring additional care (NICU admission or antibiotic treatment). The occurrence of instrumental vaginal birth and caesarean birth is also collected. These outcomes do not require contemporaneous reporting to the DSMC. These outcomes will be reported in the trial results. Unexpected events identified by the investigators or research staff are recorded with the adverse event data and reviewed by the PI for inclusion as adverse events. Severe adverse events (SAEs) are reported to the PI and also to the DSMC. Each participant’s medical record is checked for the SAEs. Severe adverse events are specified as follows: maternal admission to intensive care unit or equivalent, maternal death, stillbirth, early neonatal death (defined as within 28 days), neonatal encephalopathy. The SAEs will be reported in the trial results. Frequency and plans for auditing trial conduct {23} There are no planned audits. Plans for communicating important protocol amendments to relevant parties (e.g. trial participants, ethical committees) {25} Major changes to the protocol must be submitted as an amendment to the ethics committee, reported to the clinical site, and reflected in updated participant information materials. Dissemination plans {31a} Trial participants have the option of indicating their desire to receive trial results when they consent to participate in the trial. The investigators intend to provide the results via local networks throughout New Zealand as well as internationally via conference presentations and publication in a peer-reviewed journal.
Recruitment for the trial commenced in June of 2021. During the first 18 months of the study, there were significant pandemic-related issues which impaired the recruitment rate. These included ‘silos’ of staff being created and discouragement of mixing across teams, hospital-wide shutdowns of research for weeks–months on three occasions and, lastly, inability of staff to perform recruitment as research was deemed a non-critical academic activity. Recruitment is now possible in the hospital, and staff have returned to study-related roles. Internal barriers have been encountered, mostly related to the attitudes and beliefs of the clinical staff. The standard practice in New Zealand has for decades been the performance of amniotomy as soon as feasible, followed by oxytocin commencement. In spite of inconclusive data regarding duration of induction in the setting of amniotomy and the optimal timing of amniotomy, there is a lack of equipoise amongst the staff and potential recruiters. This has decreased buy-in from some lead maternity carers (LMCs) and their patients. Due to staff beliefs about timing of amniotomy during induction and labour duration, we have chosen hospital days as the element included in the cost-effectiveness analysis. One of the main factors causing care delays on our unit is lack of beds and suboptimal staffing of shifts. For this reason, length of stay is highly relevant locally. A second barrier has been a commonly-held belief amongst both midwives and obstetricians that infusing oxytocin without performing an amniotomy increases the risk of amniotic fluid embolism (AFE). It is unclear what the basis for this belief is, but prior to trial start, the practice of early amniotomy was widely adhered to with the thought that it would prevent AFE. Now that educational sessions and individual discussions regarding the lack of evidence of risk with oxytocin infusion with intact membranes have been undertaken and the trial is underway with no findings to indicate a potential harm of delayed amniotomy, there has been increased momentum with recruitment. Lastly, it has been noted that there have been several protocol violations in the ‘Late ARM’ group in the trial. This is for a variety of reasons, including misunderstanding of the trial protocol, participant request for amniotomy or staff obstetrician/LMC recommendation for amniotomy without a clear indication for this to be performed. This is not entirely unexpected, considering the strong belief amongst some care providers that amniotomy is important to induction of labour and the ability of participants to withdraw from the study at any time.
Now that the trial is well underway and there has been ongoing unit and individual education about the trial, most people are treated per trial allocation. The investigators will perform a per-protocol analysis when recruitment is complete and the rate of protocol violations for both participant and caregiver-driven reasons is known. Protocol version number: 2. Date recruitment began: 03 June 2021. Approximate date recruitment will be complete: 30 May 2025.
Computer-Assisted Evaluation of Zygomatic Fracture Outcomes: Case Series and Proposal of a Reproducible Workflow
6ac6280e-e726-4555-82cc-a048b1443c46
11860590
Surgical Procedures, Operative[mh]
Zygomatico-maxillary complex (ZMC) fractures are commonly encountered in maxillofacial surgery practice, accounting for approximately 24% of all facial trauma cases . Injuries leading to ZMC fractures typically result from physical assaults, falls, road traffic accidents, and sports-related injuries . High-energy trauma may cause comminuted ZMC fractures, resulting in secondary morphological disfigurement. In fact, the zygomatic bone plays a critical role in facial aesthetics and function, determining midfacial width and protrusion, contributing to the contour of the midface, and protecting the orbital contents . Displacement of this bone can lead to facial asymmetry and ophthalmic symptoms, including restricted ocular motility, diplopia, exophthalmos, and enophthalmos . Based on this, the importance of an accurate reduction and stabilization of ZMC fractures may be easily understood. Traditional methods of treating ZMC fractures have focused on either closed or open reduction techniques, with or without internal fixation. Open reduction with internal fixation (ORIF) is the gold standard for treating unstable fractures, allowing for direct visualization of fracture lines and the placement of fixation devices . However, achieving precise anatomical reduction remains challenging, particularly in cases of comminuted fractures . Computed tomography (CT) imaging plays a crucial role in the management of ZMC fractures, providing detailed visualization of bone anatomy, fracture patterns, and associated injuries, allowing simultaneous volumetric 3D rendering of the involved area. As a result, the most commonly used diagnostic system for the diagnosis and classification of ZMC fractures (namely the Zingg classification ) is based on high-resolution CT scan findings. At the same time, post-operative CT data are crucial for evaluating surgical outcomes. Along with the evolution of CT technology, the computer-assisted surgery (CAS) paradigm has progressively spread into the cranio-facial surgery field , with the most extensive body of evidence found in zygomatic implant placement for dental rehabilitation . CAS refers to the use of advanced technologies, such as surgical navigation, computer-aided design (CAD), and computer-aided manufacturing (CAM), to enhance surgical planning, operating technique, and outcome evaluation. CAS enables the integration of patient-specific imaging data to create three-dimensional (3D) models, which allow surgeons to simulate optimal fracture reduction and assess outcomes with unparalleled precision . This technology has shown promise in improving the accuracy of fracture realignment and facilitating reproducible results, especially in complex cases where traditional methods may fall short . Despite this growing adoption, its application in maxillofacial traumatology, particularly in ZMC fractures, remains inconsistent . Standardized workflows for its use are lacking, and its implementation is often confined to clinical research settings or secondary interventions rather than routine practice in most maxillofacial surgery units . By bridging this gap, CAS could offer a transformative tool for achieving better anatomical reduction and symmetry in the routine treatment of ZMC fractures, ultimately addressing the limitations of traditional freehand techniques . In the present investigation, we aimed to evaluate the feasibility of CAS application in assessing the outcomes of zygomatic fracture reduction using a cephalometric coordinate system.
As a secondary aim, we evaluated the surgical outcome, comparing the actual post-operative result with an ideal virtual planning obtained from CAS utilization. 2.1. Study Design and Ethical Approval To address the research purposes, we performed a retrospective cohort study including patients treated by the Maxillo-Facial Surgery Unit of the University Hospital “Le Scotte” in Siena, Italy, between January 2017 and June 2024. The study protocol was designed in conformity with the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the Ethical Committee for Clinical Research of the University Hospital of Siena (approval no. 18/2023). All data have been reported according to the STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) guidelines ( www.strobe-statement.org ). The study included patients who were diagnosed with ZMC fractures and brought to the attention of the Surgical Unit, undergoing surgery between 1 January 2017 and 30 June 2024. 2.2. Inclusion/Exclusion Criteria and Data Collection Inclusion criteria for the study were the following: (i) adult patients, (ii) ZMC fractures classified as type B or C according to the Zingg classification, and (iii) patients treated with surgical ORIF of the fractures . Patients were excluded from the study for the following reasons: (i) bilateral ZMC fractures; (ii) presence of other concomitant midface fractures; (iii) previous midface fractures; (iv) incomplete radiological, surgical, or follow-up data. The collected data for each patient included the following: (i) personal data, (ii) patient’s history (trauma dynamics, past medical history, occupational background, present and past use of medication), (iii) clinical characteristics of the fracture (classification according to Zingg ), (iv) surgical data (number of accesses, methods of reduction and fixation), (v) pre-operative and post-operative CT scan data, (vi) clinical follow-up data. 2.3. Surgical Approaches All the surgeries were performed under general anesthesia by the same Maxillo-Facial surgeons team. For each patient, either two (infraorbital/transconjunctival + lateral orbital approach) or three (infraorbital/transconjunctival + lateral orbital + intraoral approach) surgical approaches were performed to expose the fractures, depending on the entity of the zygomatic displacement, the number of bone fragments and the surgeon’s preference . 2.4. Anatomical Landmarks In order to compare the position of the zygomatic bone fragments before and after surgery, and to compare them to the virtual computer-assisted reduction, the five anatomical zygomatic landmarks proposed by Giran were adopted and marked , together with an orthonormal coordinate system constructed as follows: The Z median plane passing through the midpoint of the fronto-nasal suture (MidM), the midpoint of the posterior clinoid process (MidClp), and the foramen caecum (Fc). The X-plane, perpendicular to the Z-plane, and passing through MidM and MidClp. The Y-plane, constructed perpendicular to Z and X, and passing through MidClp. For each landmark point, the distance between itself and the three orthogonal planes XYZ was measured and compared between pre-operative, post-operative, and computer-assisted zygomatic positions.
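To make this measurement step concrete, the following minimal geometric sketch (our own illustration in Python, not the Mimics inPrint workflow itself) shows how the three reference planes can be derived from MidM, MidClp, and Fc, how a landmark's distances to the X, Y, and Z planes can then be computed, and how a point can be mirrored across the median plane as in the mirroring step of the digital workflow described next; all coordinates are hypothetical.

```python
import numpy as np

def _unit(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

def reference_frame(mid_m, mid_clp, fc):
    """Unit normals of the Z (median), X and Y planes described in Section 2.4."""
    mid_m, mid_clp, fc = map(np.asarray, (mid_m, mid_clp, fc))
    axis = mid_clp - mid_m                       # direction shared by the Z and X planes
    n_z = _unit(np.cross(axis, fc - mid_m))      # median plane through MidM, MidClp, Fc
    n_x = _unit(np.cross(axis, n_z))             # perpendicular to Z, through MidM and MidClp
    n_y = _unit(axis)                            # perpendicular to Z and X, through MidClp
    return n_x, n_y, n_z

def plane_distances(mid_m, mid_clp, fc, landmark):
    """Distances (mm) from a landmark to the X, Y and Z reference planes."""
    p = np.asarray(landmark, dtype=float)
    n_x, n_y, n_z = reference_frame(mid_m, mid_clp, fc)
    return {
        "dX": abs(np.dot(p - np.asarray(mid_m), n_x)),
        "dY": abs(np.dot(p - np.asarray(mid_clp), n_y)),
        "dZ": abs(np.dot(p - np.asarray(mid_m), n_z)),
    }

def mirror_across_median(mid_m, mid_clp, fc, point):
    """Reflect a point across the median (Z) plane, as in the mirroring step."""
    p = np.asarray(point, dtype=float)
    _, _, n_z = reference_frame(mid_m, mid_clp, fc)
    return p - 2.0 * np.dot(p - np.asarray(mid_m), n_z) * n_z

if __name__ == "__main__":
    # Hypothetical CT coordinates (mm), for illustration only.
    mid_m, mid_clp, fc = (0, 80, 40), (0, 20, 30), (0, 70, 20)
    fzs = (45, 60, 25)                                        # e.g. a fronto-zygomatic suture point
    print(plane_distances(mid_m, mid_clp, fc, fzs))
    print(mirror_across_median(mid_m, mid_clp, fc, fzs))      # lands at x = -45 in this example
```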
2.5. Digital Workflow For each patient, the workflow was the following: (1) pre-operative and post-operative CT scan acquisition; (2) definition of Hounsfield range of interest and CT scan segmentation using Mimics inPrint Software version 3.0 (Materialise N.V., Leuven, Belgium) ; (3) bone fragments isolation using the split tool; (4) mirroring of the contralateral healthy side; (5) definition of zygomatic anatomical landmarks and orbital volume measurement; (6) computer-assisted optimal reposition simulation ; (7) anatomical landmark data extraction and volumetric analysis. 2.6. Statistical Analysis Descriptive statistics, including mean, median, standard deviation (SD), and standard error (SE), were calculated for each anatomical landmark and its respective deviation along the X, Y, and Z axes. The Shapiro–Wilk test was used to assess the normality of data distributions. Paired sample comparisons between the fractured and contralateral (healthy) sides, as well as between post-operative and computer-assisted surgery (CAS)-simulated reductions, were performed using the Wilcoxon signed-rank test for non-parametric data. The statistical analyses were performed using Jamovi software (version 1.6, 2021, open access software available at https://www.jamovi.org , accessed on 24 November 2022).
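As a minimal illustration of the paired, non-parametric comparison described above (the study's analyses were run in Jamovi on the real dataset), the sketch below applies a Wilcoxon signed-rank test to hypothetical paired distances, one value per patient for the operated side and for the mirrored healthy side.

```python
# Hypothetical paired data: one FZS-to-plane distance (mm) per patient on the
# operated side and on the mirrored healthy side.  Values are invented; the
# study's analyses were performed in Jamovi on the real dataset.
from scipy.stats import wilcoxon

operated = [42.1, 39.8, 44.5, 41.0, 40.2, 43.3, 38.9, 42.7, 41.9, 40.6]
healthy  = [40.3, 39.5, 42.0, 40.8, 38.9, 41.1, 38.2, 41.5, 41.0, 39.7]

stat, p_value = wilcoxon(operated, healthy)   # paired signed-rank test on the differences
print(f"W = {stat:.1f}, p = {p_value:.3f}")
```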
3.1. Study Population Sixteen patients with a surgical ZMC fracture were included in the study. Individual patients’ data are summarized in . Of the included patients, 11 (69%) were men. The average age at the time of the surgical procedure was 48.1 ± 17.6 years. We found a predominance of the affected side being on the right malar bone: 13 cases (81%) vs. 3 (19%). Traffic accidents and accidental falls (three cases, 19%, both) were the most frequent etiologies, followed by sports-related injuries (two cases, 12.5%), with other causes aggregated in three patients (19%). No data were found in five cases (31%). Fractures were classified as Zingg type B (10 patients, 62%) or type C (6 patients, 38%), with an even distribution of surgical accesses (50% two accesses and 50% three accesses). Nine patients (56%) were treated with 2-point fixation, while seven (44%) had three- or more-point fixation methods. The mean follow-up period was 3.8 months (SD: 2.76). For each patient, the digital workflow shown in the Methods section was applied, requiring approximately 90 min for patients classified as Zingg B and 105 min for Zingg C patients to complete the whole procedure and extract the cephalometric data. 3.2. Post-Operative Outcomes: Right-Left Discrepancy and Surgical Correction Versus CAS Optimal Reduction Post-surgical cephalometric landmarks analysis on the three axes is reported in .
Paired sample comparisons between the fractured and contralateral (healthy) sides showed no difference along the Z -axis, while in the X -axis results, only FZS ( p = 0.017) indicated residual discrepancies in alignment. On the Y -axis, significant asymmetries were observed for MP ( p = 0.009), FZF ( p = 0.004), and ZT ( p = 0.003). All the comparisons are reported in . Comparisons between post-operative and CAS-simulated reductions along the Z -axis showed statistical significance for FZS ( p = 0.002) and MP ( p = 0.044), indicating measurable differences between surgical reduction and CAS simulation for these landmarks. X -axis discrepancies highlighted notable deviations still for FZS ( p = 0.010) and MP ( p = 0.020). For the Y -axis, FZS ( p = 0.019), FZF ( p = 0.011), and ZT ( p = 0.025) indicated significant differences. All the comparisons are reported in . Among the five anatomical landmarks analyzed across the three planes and two comparisons, the fronto-zygomatic suture (FZS) was the most frequently significant, showing discrepancies in four out of six analyses (Z, X, and Y axes for both CAS and contralateral side comparisons). The fronto-zygomatic foramen (FZF) and maxillary process (MP) were significant in three out of six analyses, particularly along the Y axis for both CAS and contralateral comparisons and the Z axis for CAS. The zygomatic tubercle (ZT) was significant in two out of six analyses, primarily along the Y axis. In contrast, the orbital rim (OR) showed no significant discrepancies in any comparison, suggesting consistent alignment. Considering the five different anatomical landmarks across the three planes and two types of analysis, we found the following characteristics: FZS: significant in four out of six analyses (discrepancies on the Z, X, and Y axes with both CAS and contralateral sides); FZF: significant in three out of six analyses (discrepancies on the Y axis with both CAS and contralateral sides, and discrepancies on the Z axis with CAS); MP: significant in three out of six analyses (discrepancies on the Z axis with CAS and discrepancies on the Y axis with contralateral sides); ZT: significant in two out of six analyses (discrepancies on the Y axis with both CAS and contralateral sides); OR: never significant in any axis or comparison.
This study analyzed a cohort of 16 patients undergoing surgical treatment for unilateral ZMC fractures. A male predominance (69%) and a higher prevalence of right-sided fractures (81%) were observed, consistent with the literature on ZMC fractures, which are often due to high-energy trauma (the etiologies of fractures in this study included traffic accidents, accidental falls, and sports-related injuries, with some cases lacking specific data). The application of a digital workflow requiring approximately 90–105 min, depending on fracture complexity, demonstrated the feasibility of integrating CAS into routine clinical practice. The use of CAS facilitated cephalometric measurements and enabled a detailed assessment of post-operative outcomes in relation to the contralateral side and CAS-optimized reduction. Significant right-left discrepancies were identified in the X -axis (FZS) and the Y -axis (MP, FZF, and ZT). While the absence of Y -axis data for 25% of cases (four patients) must be acknowledged, these results suggest that achieving perfect symmetry remains challenging in certain regions, even with advanced surgical techniques.
Similarly, comparisons between surgical outcomes and CAS-optimized reductions revealed significant discrepancies for at least two landmarks in each plane, highlighting areas where surgical accuracy could improve . In our analysis, FZS was the most frequently significant landmark, highlighting challenges in achieving midfacial symmetry. FZF, MP, and ZT also showed significant discrepancies in multiple axes, which must be underscored considering their importance in both aesthetic and functional outcomes. The findings confirmed the need for enhanced precision in these regions, particularly for deeper or lateral landmarks like MP and ZT, where traditional methods often fall short. Conversely, OR (likely benefiting from its straightforward intra-operative exposure and direct fixation options) consistently showed no significant discrepancies, suggesting that conventional surgical techniques reliably address its alignment. The optimal number of fixation points for ZMC fractures remains another subject of ongoing debate . In this study, the fixation points varied depending on the fracture complexity and surgeon preference, with most cases employing two- or three-point fixation techniques. Preliminary findings from our analysis suggest that the number of fixation points may influence post-operative symmetry, particularly in regions with greater complexity, such as the infraorbital rim or zygomatic arch. While three-point fixation is often recommended for providing enhanced stability and reducing rotational deformities, two-point fixation can be effective in cases with minimal displacement or simpler fracture patterns. However, discrepancies noted in deeper landmarks, such as MP and ZT, may indicate that additional fixation points could help achieve better alignment in certain cases. Further research is warranted to clarify the relationship between the number of fixation points and long-term functional and aesthetic outcomes. Prospective, controlled studies could provide more definitive guidance and help develop tailored approaches based on fracture characteristics and individual patient needs. The findings of this study supported the growing body of evidence that CAS could enhance the pre-operative evaluation and management of ZMC fractures. The ZMC is a critical structure in the midface, influencing midfacial width and projection. It has an irregular three-dimensional shape and a complex anatomical structure, forming the lateral wall of the orbit and being surrounded by various muscles. When ZMC fractures occur, the increased risk of functional and aesthetic defects complicates treatment. Thus, the main goal in treating ZMC fractures is to restore the midfacial contour, with precise reduction being crucial; achieving successful reduction also largely depends on the surgeon’s experience. Previous studies supported the value of computer-assisted navigation systems in improving surgical precision and outcomes. For instance, Bao et al. highlighted the effectiveness of surgical navigation in restoring facial symmetry, particularly in complex fractures . Similarly, He et al. reported that using surface markers during navigation-assisted surgery allowed for a highly accurate reduction in delayed fractures with minimal post-operative asymmetry . In a recent article, Committeri et al.
compared outcomes in patients with ZMC fractures managed with computer-assisted planning versus traditional management. Their results showed that CAS reduced surgical time and post-operative complications but, most importantly, allowed greater intra-operative accuracy . In addition, a newly released investigation by Hassan et al. showed that CAS combined with 3D printing facilitates anatomically accurate reduction and fixation of ZMC fractures. In the present study, the use of CAS technology allowed for objective comparisons between pre-operative planning and actual surgical outcomes. By analyzing both two-dimensional measurements and three-dimensional volumetric comparisons, this study suggested that CAS could provide detailed insights into the accuracy of fracture reduction. This technology not only facilitates the precise positioning of bone segments but also enables volumetric assessments that are essential for evaluating outcomes in complex fractures. An important consideration in the evaluation of surgical outcomes using CAS technology is interobserver variability. Despite the standardized workflow employed in this study, differences in landmark identification and segmentation among evaluators may influence the reproducibility of results. This variability underscores the need for automated or semi-automated approaches to reduce subjective bias and improve consistency. For example, the integration of artificial intelligence for automatic landmark detection could standardize measurements and decrease operator dependency. A dedicated study investigating interobserver variability in the application of CAS technology would provide valuable insights into its reproducibility and help refine protocols to minimize potential inconsistencies. Addressing interobserver variability is essential to ensure that CAS is not only a precise but also a reproducible tool for assessing and improving outcomes in ZMC fracture management. An additional application of the proposed computer-assisted workflow lies in the projection and customization of titanium mesh orbital implants required for orbital reconstructions. The use of CAS technology enables precise pre-operative planning and intraoperative execution, particularly in restoring orbital volume and contour. By integrating 3D imaging data, surgeons can accurately assess orbital defects and design patient-specific implants that ensure optimal anatomical fit and stability . This approach is particularly beneficial in cases involving ZMC fractures with concomitant orbital wall involvement, where accurate restoration of the orbital framework is critical to avoid functional complications such as enophthalmos or diplopia. The ability to incorporate the projection of titanium mesh implants into the digital workflow further underscores the versatility of CAS in addressing complex midfacial fractures, offering both aesthetic and functional benefits. Future studies should explore the role of this workflow in improving outcomes for orbital reconstructions, particularly in challenging cases requiring extensive repair. Despite the potential advantages of CAS, challenges remain. The time and cost associated with generating and working with 3D models may limit their widespread adoption in clinical practice, although new technologies such as AI and deep learning models could simplify and expand their applicability .
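As an illustration of the volumetric assessments mentioned above, the short sketch below quantifies the agreement between two segmented volumes (for example, the post-operative zygoma and its CAS-simulated position) as a voxel-overlap Dice coefficient. This is only one common choice of metric, not necessarily the measure used by the authors, and the binary masks here are synthetic.

```python
# Illustrative sketch only: agreement between two segmented zygoma volumes
# (post-operative result vs. CAS-simulated reduction) expressed as a Dice
# overlap score. The article reports "volumetric comparisons" without naming
# the metric, so this is an assumed example on synthetic masks.
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean voxel masks of equal shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

# Hypothetical 3D binary masks on a common voxel grid (e.g., from CT segmentation).
postop_mask = np.zeros((64, 64, 64), dtype=bool)
postop_mask[20:44, 20:44, 20:44] = True           # segmented post-operative zygoma
planned_mask = np.zeros((64, 64, 64), dtype=bool)
planned_mask[22:46, 20:44, 20:44] = True          # CAS-simulated (planned) position

print(f"Dice overlap: {dice_coefficient(postop_mask, planned_mask):.3f}")
```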
In the context of this pre-operative burden, a study by Jiang et al. underscored the potential of CAS. They used modified patient-specific surgical guides to address comminuted ZMC fracture reduction, highlighting that less-experienced surgeons can particularly benefit from CAS despite the high pre-operative effort and skills required. Moreover, as previously reported, there is still variability in the clinical outcomes depending on the surgeon’s experience and the complexity of the fracture . Nonetheless, CAD-CAM technology represents a significant step forward in the pursuit of more predictable and reproducible outcomes in the treatment of ZMC fractures. Strengths, Limitations, and Future Directions The limitations of this study are as follows: (i) the limited sample size (16 patients), which prompts caution in generalizing results; (ii) the exclusion of bilateral fractures or patients with complex midfacial injuries, thus limiting the applicability in more complex cases; (iii) the retrospective design, introducing possible information biases (including potentially limited data on etiology). Additionally, from a technical point of view, the reliance on manual segmentation and landmark identification could introduce inter-observer variability, even with a standardized workflow. Despite these limitations, this study has several strengths, including the use of a reproducible and standardized digital workflow for outcome evaluation in ZMC fractures. The inclusion of patients with different degrees of fracture severity (Zingg type B and C) and fixation methods potentially improved the generalizability of the findings to diverse clinical scenarios. Future research should aim to address these limitations by including a larger, more diverse cohort of patients and exploring the applicability of CAS in bilateral and comminuted fractures. A larger cohort of patients could also provide the opportunity to compare outcomes between groups who underwent different surgical approaches (e.g., number and type of accesses performed, number of plates used), offering valuable information to aid in the development of a symmetry-focused treatment algorithm for zygomatic fractures. Prospective studies could enhance data consistency and further validate the reproducibility of the digital workflow. Advancements in artificial intelligence and machine learning could offer promising opportunities to automate landmark detection and segmentation, potentially reducing observer variability and improving efficiency. Moreover, long-term follow-up studies might assess the functional and aesthetic outcomes, providing a more comprehensive evaluation of CAS benefits. Integration of 3D printing and patient-specific surgical guides may further refine pre-operative planning and intraoperative execution, paving the way for more personalized and precise care in maxillofacial surgery.
This retrospective study highlights the significant potential of CAS in evaluating the outcomes of ZMC fracture treatment. The integration of CAS enabled precise comparisons between surgical results and optimized virtual reductions, revealing key discrepancies in critical cephalometric landmarks such as the fronto-zygomatic suture, zygo-maxillary point, and zygo-temporal point. These findings emphasize the challenges of achieving ideal symmetry using traditional surgical methods, even with advanced fixation techniques. The standardized digital workflow employed in this study appeared to be reproducible and effective in enhancing the objectivity of outcome evaluation, supporting the adoption of CAS in routine clinical practice for zygomatic fractures. By facilitating both two-dimensional and three-dimensional analyses, CAS offers a valuable tool for surgeons to improve accuracy and achieve better functional and aesthetic outcomes. Despite its advantages, limitations such as high pre-operative time requirements, costs, and a small study cohort must be addressed. The study underscores the need for future research focusing on larger and more diverse patient populations, the inclusion of bilateral or comminuted fractures, and the use of artificial intelligence to streamline and automate processes. As CAS technology continues to evolve, its role in improving predictability, reproducibility, and precision in maxillofacial surgery is expected to expand, paving the way for more personalized and effective patient care.
Accuracy of digital duplication scanning methods for complete dentures
6ef7bc83-f229-49e7-aa2a-6c5318135ea1
11730745
Dentistry[mh]
A master cast was selected based on the American College of Prosthodontists (ACP) Prosthodontics diagnostic index type A classification of residual ridge morphology. The selected master cast was scanned using a desktop scanner (7 Series; Institute Straumann AG). The scanned master cast was used to digitally design a complete denture using CAD software (Dental Studio; 3Shape). The designed denture was imported to open-source software (Meshmixer; Autodesk Inc.) and segmented into four segments (denture extension, dentition, intaglio, and combined) as reference files. The digitally designed complete denture was exported as an STL file and 3D printed using a digital light processing (DLP) 3D printer (Asiga Max; Asiga, Sydney, Australia) with light-polymerizing resin (Crown and Bridge, DENTCA, Figure ). The sample size and power analysis for this study were calculated based on a previous study with a similar approach. A sample size of 10 per method would allow 80% power to detect an effect size of 1.325 between methods, based on a two-sample t-test calculation at a two-sided 5% significance level. The printed complete denture was digitized utilizing different scanning methods, and four study groups ( n = 10/group) were included in this study. Group A: cone beam computed tomography (CBCT) (Planmeca Viso G7; Planmeca, Helsinki, Finland); Group B: desktop scanner (7 Series; Institute Straumann AG); Group C: Trios intraoral scanner (Trios 4; 3Shape, Copenhagen, Denmark); and Group D: Virtuo Vivo intraoral scanner (Straumann AG). For group A, the denture was scanned 10 times by a CBCT scanner (Planmeca Viso G7; Planmeca, Helsinki, Finland). The scanning protocol included 100 kV, 40 mAs, a field of view (FOV) of 80 × 80 mm, and a voxel size of 139 μm. The DICOM files were reconstructed using open-source software (InVesalius 3.1, CTI, Brazil) and exported as STL files. For group B, the denture was scanned 10 times by a desktop laser scanner (7 Series; Institute Straumann AG). The scanning protocol involved scanning the cameo surface of the denture first, followed by the intaglio surface. The two scan surfaces were then combined into a single STL file using open-source CAD software (Meshmixer; Autodesk Inc). The scans were exported as STL files. For groups C and D, the denture was scanned 10 times using a Trios intraoral scanner (Trios 4; 3Shape, Copenhagen, Denmark) and a Virtuo Vivo intraoral scanner (Straumann AG), respectively. The scanning protocol for groups C and D involved separately scanning the intaglio, cameo, and border of the complete denture. In the first scan, the intaglio surface of the denture was scanned, and the scan extended beyond the denture border in a wavy motion from posterior to anterior (Figure ). In the second scan, the cameo surface was scanned beyond the denture border, starting from the right maxillary tuberosity to the left maxillary tuberosity buccally and continuing to the occlusal surface from the left maxillary tuberosity to the right maxillary tuberosity. The palate was captured in a wavy motion. Subsequently, the denture border was scanned using the interocclusal record scan in the scanning workflow. The three scans were then superimposed and merged using the overlap of the border with the assistance of open-source CAD software (Meshmixer; Autodesk Inc). The scans were exported as STL files.
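The reported sample-size statement (10 scans per method giving 80% power to detect an effect size of 1.325 with a two-sample t-test at a two-sided 5% level) can be checked with a standard power calculation. The sketch below is an illustrative verification only, not the authors' original computation.

```python
# Illustrative check of the reported sample-size calculation: n per group for a
# two-sample t-test with effect size d = 1.325, two-sided alpha = 0.05, power = 0.80.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.325, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.1f}")  # ~10, matching the study
```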
The STL files gathered from all four groups were segmented and divided into the intaglio, denture extension, dentition, and combined surfaces for measuring the trueness and precision of each method. The trueness and precision were evaluated by superimposing the four surface STL files (intaglio, denture extension, dentition, and the combined file) from the scanned surfaces of the denture onto the corresponding reference STL files of the digitally fabricated denture, resulting in four reference files and 160 test files. All the digital file comparisons were made in surface matching software (Geomagic Design X; 3D Systems, Rock Hill, SC) using the best-fit alignment method. The data collected from the same surface were grouped and analyzed. Deviations between the test and the corresponding reference STL file were expressed as the mean, standard deviation (SD), and root mean square (RMS). The RMS value represented the absolute value of the dimensional difference between the study samples and the original digitally designed denture. Moreover, the software color mapping feature was utilized to illustrate the digital file accuracy of the scanning method on the four surfaces. F-tests were used to compare the groups for differences in within-group standard deviations among the 10 samples. One-way ANOVAs were used to compare the groups for mean differences in deviations from the original dentures in RMS while accounting for unequal group variances. Two-sided 5% significance levels were used for all tests. All statistical analyses were performed using SAS version 9.4 (SAS Institute, Inc., Cary, NC, USA). Descriptive statistics, including the mean and standard deviation (SD), are presented in Table . The RMS values for each group and surface are summarized in a boxplot diagram in Figure . The results indicated that the CBCT group had the highest RMS values, while the desktop scanner (DS) and Trios intraoral scanner (TIO) groups had the lowest RMS values among all groups. To examine trueness, one-way ANOVA was used for mean differences in deviations from the reference file in RMS. The mean RMS values used for trueness are shown in Table . Regarding scanning accuracy of the entire denture, the CBCT group showed the highest RMS (0.249 ± 0.020 mm) and lowest trueness compared to the DS (0.124 ± 0.014 mm, p < 0.001), TIO (0.131 ± 0.006 mm, p < 0.001), and Virtuo Vivo intraoral scanner (VVIO; 0.227 ± 0.020 mm, p = 0.017) groups, while DS and TIO showed significantly smaller RMS than VVIO. For the trueness of the dentition, denture extension, and intaglio surfaces, the CBCT group also showed the highest mean RMS and lowest trueness among all groups ( p < 0.001). In contrast, DS and TIO had smaller mean RMS and higher trueness than the other groups in all surfaces ( p < 0.001, except for the comparison with VVIO on the intaglio surface, p > 0.05). To evaluate precision, F-tests were used for differences in within-group standard deviations among the 10 samples. Table also shows the standard deviations of the RMS values used to measure precision. When measuring the combined surfaces, TIO had significantly lower variability of RMS within each sample compared to the DS ( p = 0.013), CBCT ( p = 0.001), and VVIO ( p < 0.001) groups. This suggests that TIO had the highest level of precision among all groups. For the dentition and denture extension surfaces, the DS and TIO groups had similar within-group variability of RMS ( p > 0.05) and lower (more precise) values than the CBCT and VVIO groups ( p < 0.001).
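The two core quantities in this analysis, the RMS deviation used for trueness and the variance-ratio (F) comparison used for precision, can be illustrated with a short sketch. The deviation data below are made up for illustration; the study's actual analyses were run in SAS.

```python
# Illustrative sketch (synthetic data, not the study's SAS analysis): computing
# the RMS deviation of a scanned surface from its reference after best-fit
# alignment, and comparing two groups' within-group variability with an F-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def rms(deviations: np.ndarray) -> float:
    """Root mean square of signed point-wise deviations (mm)."""
    return float(np.sqrt(np.mean(np.square(deviations))))

# Hypothetical per-point deviations (mm) for one scan from two scanner groups.
print(rms(rng.normal(0.0, 0.12, size=5000)))   # e.g., a desktop-scanner sample
print(rms(rng.normal(0.0, 0.25, size=5000)))   # e.g., a CBCT sample

# Precision comparison: variance-ratio F-test on the ten per-sample RMS values
# of two groups (hypothetical values loosely mirroring the reported SDs).
rms_group_tio = rng.normal(0.131, 0.006, size=10)
rms_group_ds = rng.normal(0.124, 0.014, size=10)
f_stat = np.var(rms_group_ds, ddof=1) / np.var(rms_group_tio, ddof=1)
p_value = 2 * min(stats.f.sf(f_stat, 9, 9), stats.f.cdf(f_stat, 9, 9))  # two-sided
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```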
For the intaglio surface, the CBCT and TIO groups exhibited similar within-group variability of RMS ( p = 0.693) and lower (more precise) values than the DS group ( p = 0.037). TIO showed lower within-group variability of RMS and higher precision in the intaglio surface compared to the VVIO group ( p = 0.022). Color maps of the surface matching differences for each surface are shown in Figure . Areas in blue indicate negative discrepancies, and areas in yellow/red indicate positive discrepancies when comparing the scan samples with the digitally fabricated denture. The area in green indicates surface matching within ±0.10 mm. In the combined surfaces, the CBCT group showed more overall negative discrepancies. The VVIO group showed more positive discrepancies (red) in the mid-palatal region of the combined surface. In the intaglio surface, the TIO group showed more green (closer surface matching) over the alveolar ridge. In the present study, the digital file accuracy of a duplicated complete denture using different scanning methods (CBCT, 7 Series desktop scanner, Trios intraoral scanner, and Virtuo Vivo intraoral scanner) was investigated. The null hypothesis was rejected, suggesting that different scanning techniques affect the trueness and precision of duplicated dentures. CBCT showed the lowest trueness of all groups in the examined surfaces. Moreover, CBCT and VVIO were less precise than DS and TIO in the combined, dentition, and denture extension surfaces. On the other hand, TIO had the highest precision among all groups in the combined surface. Digital denture duplication is usually done by using a desktop scanner. This could be due to the desktop scanner's higher resolution, the fixed position of the scanned object, the exclusion of ambient light to improve scanning performance, and a larger camera than that of an intraoral scanner, which reduces image superimposition errors. , In general, desktop scanners are more often used by dental laboratories instead of dental offices. As a result, it is often necessary to send dentures to the lab technician to use the desktop scanner to duplicate the denture. Other possible methods of duplicating dentures include using either CBCT or intraoral scanners, which are now widely available in dental offices. With the improvement of CBCT and intraoral scanners, denture scanning by CBCT and intraoral scanners has been reported in the literature. , , When using an intraoral scanner to duplicate a denture, the final accuracy of the datasets could also be influenced by variations in the image acquisition technology used by different optical scanners. Three types of optical scanners were included in the present study: the Trios 4 IOS uses confocal microscopy to construct 3D data from a number of two-dimensional images; the desktop scanner (7 Series) is based on blue laser triangulation technology with free movement of axes; and multiscan imaging technology is used in the Virtuo Vivo IOS. The scanning accuracy of different optical scanners for dental implants and tooth preparations has been reported in several studies. Diker et al. reported that the Trios 4 IOS resulted in better trueness compared to the Virtuo Vivo IOS in complete arch scanning. Baghani et al. and Chen et al. showed that DS was more accurate than IOS for the full arch span. , Çakmak compared the accuracy of in vivo scans of the Virtuo Vivo IOS and Trios 3 IOS to the 7 Series DS for full arch implants placed in an edentulous mandible. The results showed that the distance deviation in DS was the highest, whereas that in the Trios IOS was the lowest.
However, to the best of the authors' knowledge, there are insufficient studies comparing the accuracy of duplicating complete dentures with IOS and DS. One study by Matsuda et al. evaluated the geometric accuracy of imaging of a complete denture form using a DS and a handheld scanner. The results of that study suggested that handheld scanners had lower accuracy than the DS. Although the scanners used in the study were different, the results of the present study partially agree with the findings of Matsuda et al.: the DS showed higher trueness than the VVIO but was not significantly different from the TIO. However, it should be noted that Matsuda et al. only compared the whole scan surface and did not examine the individual intaglio and polished surfaces or the dentition surface and border extension, which were specifically analyzed in this study. The findings from the current study suggest that the DS still showed higher trueness in the dentition surface and border extension. The contrasting outcomes between the present study and Matsuda's research might be attributed to different scanners, denture materials, and experimental procedures. Moreover, not all scanners are capable of scanning both the polished and intaglio surfaces in one single scan. To ensure a standardized scanning process, the present study combined the intaglio surface scan and polished surface scan using CAD software to generate all the duplicate denture STL files. However, the merging of the two scans may have caused some distortion, which could have contributed to the discrepancies observed. Fully guided implant surgery treatment planning for an edentulous patient usually involves a CBCT scan and a dual scan protocol. In the dual scan protocol, the denture is scanned with CBCT and merged with the patient's CBCT data set to allow prosthetic-driven guided implant placement. It was reported that to obtain the maximum accuracy from the CBCT scan, each voxel size has an optimal segmentation threshold. Thus, it would be difficult to standardize the segmentation threshold for all scans. The smoothing procedure of the DICOM file was reported to reduce the size of the actual image by up to 12%. To improve denture duplication accuracy, Guilherme et al. proposed using an IOS to scan the existing denture, improving the quality of the guide by reducing the CBCT artifacts associated with the dual scan. However, there is no consensus regarding whether IOS or DS is superior in the accuracy of duplicating dentures. Al-Rimawi et al. compared the trueness of a dry human mandible scanned using the Trios 3 IOS and four different CBCT machines. The results of that study showed that CBCT-derived 3D models had better trueness than the IOS. In contrast to the previous study, Michelinakis et al. reported that IOS produced superior trueness compared to CBCT when scanning dental casts. The findings of the present study agree with the study conducted by Elkhadem et al.: the CBCT group had higher RMS values compared to the intraoral scanner groups. However, the results of this study were inconsistent with studies conducted by Chen et al. and Matsuda et al., which compared the trueness of complete denture duplication using CBCT and desktop scanners. According to their results, CBCT and desktop scanners were found to have similar trueness values. , Chen et al. also concluded that CBCT resulted in comparable trueness across the dentition, denture extension, and intaglio surfaces.
On the contrary, the present study found that CBCT showed significantly lower trueness across all four surfaces (dentition, denture extension, intaglio, and combined). These inconsistencies between the studies could be attributed to differences in segmentation threshold, scanner hardware and software, and scanning patterns, as well as different analysis software. Color maps in the present study illustrate the discrepancies in the combined surface at the dentition and denture base extension. The CBCT group exhibited more negative discrepancies in the combined surface at the dentition and denture base extension. Both the CBCT and VVIO groups showed negative discrepancies at the denture extension border. These findings suggest that negative discrepancies at the dentition may lead to a reduction in the vertical dimension of occlusion. In contrast, discrepancies at the denture extension borders could impact the denture border seal and result in reduced denture retention. In the intaglio surface, all groups showed negative discrepancies at the posterior palate, which could affect the posterior palatal seal of the denture. In addition, the DS and VVIO groups showed more negative discrepancies extending to the alveolar ridge, which may indicate the need for a denture reline. The CBCT group showed more positive discrepancies at the crest of the alveolar ridge, which could indicate the need for a denture adjustment. A limitation of this study is that the selected ridge morphology does not take excessive undercuts and long denture extensions into consideration. The denture duplication protocols followed with the scanners mostly reflected ideal-case scenarios. Further studies are required to evaluate duplication accuracy in complete dentures with different morphologies and scanning protocols. Besides additive technology, different complete denture fabrication techniques can also be considered in future studies. Although the results of the present study add evidence on multiple digital denture duplication scanners, with the rapid advancement of scanners and scanning methods, more studies are needed to provide practitioners with clear guidance to achieve optimum patient care. The 7 Series desktop scanner and Trios 4 intraoral scanner can duplicate dentures with higher trueness than CBCT and the Virtuo Vivo intraoral scanner. The Trios 4 intraoral scanner produces more precise results in the combined surface than the other scanning methods, while the desktop scanner and Trios intraoral scanner are more precise in the denture extension surface.
Medical student competence in ophthalmology assessed using the Objective Standardized Clinical Examination
21d439d9-5911-49c3-adf1-7c3eb0fd403b
10391480
Ophthalmology[mh]
Participants Medical students from two classes at a single medical school were included in the study. This project adhered to the Declaration of Helsinki and abided by all regional, national, and international laws of the institution in which the project was conducted. Patient consent was sought and obtained. The first group of students consisted of 100 pre-clerkship students (Group A) and the second group comprised 98 clerkship students (Group B). During the regular OSCE administration for each class, the students had to complete one OSCE station involving the following prompt: “A patient presents to you with blurry vision and markedly decreased visual acuity.” This station was used to assess competency in ophthalmology and was broken down into three parts. Part 1 consisted of history taking, part 2 consisted of generating differential diagnoses, and part 3 consisted of the ophthalmic exam . Each student in both groups received the same prompt, and examiners were given a scoring rubric . The students had access to a blank sheet of paper and a pencil, as well as several clinical skills testing tools including a stethoscope, reflex hammer, cotton balls, toothpicks, tuning forks, direct ophthalmoscope, penlight, and a Rosenbaum pocket visual acuity screener. There was no slit lamp available to the students. Analysis Each student’s performance was graded using a seven-point scale for part 1 and part 3 of the OSCE station. Means and standard deviations were used to summarize these data. Part 2 was graded using a binary classification, and proportions were used to summarize these data. Unpaired t -tests were conducted to compare overall clerk and pre-clerk performance in part 1 and part 3. Chi-squared tests were conducted to compare the performance between groups for the overall performance in part 2 and specific sub-questions within parts 1 and 3.
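As an illustration of the chi-squared comparisons described above, the sketch below reproduces one of the comparisons reported in the results that follow (whether students asked if the visual loss was transient or persistent: 68.0% of 100 pre-clerks vs. 85.7% of 98 clerks), with the counts back-calculated from the reported percentages. This is an illustrative check, not the authors' analysis code.

```python
# Illustrative check of one reported comparison; counts are back-calculated
# from the percentages (68/100 pre-clerks vs. 84/98 clerks asked the question).
# Yates continuity correction is disabled so the statistic matches the value
# reported in the text (Chi-squared = 8.71).
from scipy.stats import chi2_contingency

table = [[68, 100 - 68],   # pre-clerks: asked, did not ask
         [84, 98 - 84]]    # clerks: asked, did not ask
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"Chi-squared = {chi2:.2f}, p = {p:.4f}")  # ≈ 8.71, p ≈ 0.003
```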
The section-by-section score breakdown for each group is outlined in . The final scores and standard deviations by individual section for each group are outlined in . Overall, the pre-clerks performed worse than the clerks in part 1, the history-taking section (5.03 vs. 5.68, t = 3.52, P < 0.001). There was no significant difference between groups in asking whether the visual loss was monocular or binocular (53.0% vs. 63.2%, Chi-squared = 2.14, P = 0.14) or asking about temporal features of the vision loss (83.0% vs. 87.8%, Chi-squared = 0.90, P = 0.34). Pre-clerks asked whether the visual loss was transient or persistent significantly less often than the clerks (68.0% vs. 85.7%, Chi-squared = 8.71, P < 0.01). Pre-clerks also asked about previous visual acuity significantly less often than the clerks (57.0% vs. 80.6%, Chi-squared = 12.83, P < 0.001). However, the pre-clerks did significantly better in asking about patient age and pertinent medical history such as hypertension, diabetes, and arthritis (96.0% vs. 71.4%, Chi-squared = 22.0543, P < 0.00001). Overall, the pre-clerks were more often able to identify two or three relevant differential diagnoses than the clerks (92.0% vs. 80.6%, Chi-squared = 5.45, P < 0.05). There was no significant difference between groups in identifying retinal detachment (17.0% vs. 22.4%, Chi-squared = 0.93, P = 0.34), retinal vein occlusion (7.0% vs. 13.3%, Chi-squared = 2.07, P = 0.15), optic neuritis (11.0% vs. 19.4%, Chi-squared = 2.71, P = 0.10), transient ischemic attack (5.0% vs. 6.1%, Chi-squared = 0.12, P = 0.73), stroke (10.0% vs. 17.3%, Chi-squared = 2.27, P = 0.13), or trauma (39.0% vs. 29.6%, Chi-squared = 1.94, P = 0.16) as possible differential diagnoses. Pre-clerks identified diabetic retinopathy (96.0% vs. 70.4%, Chi-squared = 23.34, P < 0.00001) and hypertensive retinopathy (60.0% vs. 14.3%, Chi-squared = 44.19, P < 0.00001) as possible differential diagnoses significantly more often than clerks. Clerks identified glaucoma as a possible diagnosis more often than pre-clerks (9.0% vs. 23.5%, Chi-squared = 7.65, P < 0.01). Overall, pre-clerks performed worse than clerks in the ophthalmic examination section (4.33 vs. 4.70, t = 2.22, P < 0.05). There was no significant difference between pre-clerks and clerks in measuring visual acuity (84.0% vs. 80.6%, Chi-squared = 0.39, P = 0.53) or performing fundoscopy (92.0% vs. 89.8%, Chi-squared = 0.29, P = 0.59). Pre-clerks checked pupillary responses (65.0% vs. 80.6%, Chi-squared = 6.08, P < 0.05) and performed visual field testing (51.0% vs. 68.0%, Chi-squared = 6.98, P < 0.01) less consistently. Pre-clerks more consistently performed an anterior segment exam (22.0% vs. 7.14%, Chi-squared = 8.74, P < 0.01). This study found that there is room for improvement in medical student ophthalmology education. Clerks generally performed better than pre-clerks, except for some specific sub-questions and the ability to identify two or three relevant differential diagnoses. The literature suggests that there is a lack of ophthalmology teaching at the medical school level and a downward trend with regard to didactic ophthalmology teaching and clinical ophthalmology experience.
As a result, many non-ophthalmologist physicians may be inadequately trained to deal with the initial management or appropriate referral of basic ophthalmic complaints. Possibly, an increasing perception of ophthalmology as a fringe specialty within medicine, unrelated to most other specialties, has led to a decrease in ophthalmology education. Interestingly, our study found that pre-clerks performed better in generating differential diagnoses, which may suggest that clerkship exposure to ophthalmology is too limited and that students may lose the knowledge they once had due to a lack of exposure. According to the Association of University Professors in Ophthalmology 2004 Survey on Medical Student Teaching, formal ophthalmology rotations in medical school have declined significantly, from 68% in 2000 to 30% in 2004. Clerks performed significantly better than pre-clerks in history taking and performing an ophthalmic examination, although pre-clerks were more often able to identify two or three relevant differential diagnoses. Interestingly, pre-clerks outperformed clerks in specific sub-questions, such as asking about patient age and pertinent medical history, and they more often identified diabetic retinopathy and hypertensive retinopathy as possible differential diagnoses. These findings could suggest that pre-clerks were more effective at incorporating general history-taking questions, such as the past medical history, and at identifying possibly relevant systemic conditions such as diabetes and hypertension. This may be expected, as clerks may have been more focused on specific ocular pathologies. Nonetheless, identifying relevant systemic conditions and asking general history questions such as patient age and pertinent medical history are crucial skills and should be emphasized within clerkship. In the ophthalmic examination section, both groups were able to identify measuring visual acuity and fundoscopy as the important components of the ophthalmic examination; however, a low proportion of students in both groups performed an anterior segment examination. This is likely because clinical skills sessions pertaining to ophthalmology typically focus only on visual acuity and fundoscopy without discussing the anterior segment. By extension, students need to understand exterior ocular anatomy to successfully conduct an anterior segment examination. This is often not the case, although there are several high-quality, easily accessible resources for medical students. Possibly, raising awareness and placing emphasis on using such resources could improve medical students' ophthalmology knowledge and performance. In our study, the performance of students in each class was satisfactory, but there was clear room for improvement. Although the means suggest that students are in the satisfactory range, the standard deviation demonstrates a large spread of scores, indicating that several students are performing unsatisfactorily . Future work should investigate whether the spread of scores for other subject areas is similar to what was observed in this study. Students with a previous interest in ophthalmology are most likely to seek further knowledge in the field; conversely, those without interest in the field may be able to avoid ophthalmology content, as it is generally not a large component of medical school education.
This study was limited by the fact that it included students from only a single institution assessed at a single OSCE station, which may limit the extrapolation of results. Additionally, this study was limited by the possible subjectivity involved in distinguishing the grades in parts 1 and 3 of the OSCE station. Overall, the results of our study indicate that there is a need for improvement in ophthalmology teaching, although it appears that many students are performing at the expected level for ophthalmic content and clinical skills. Notably, there is some evidence from our study which suggests that clerks perform worse than pre-clerks in certain aspects, which reinforces the importance of revisiting concepts, even basic ones, previously covered in the medical school curriculum. Even a single-week ophthalmology rotation has been shown to improve ophthalmic knowledge in clerks. Consistently including ophthalmology-focused stations in OSCEs may encourage students to keep up to date with ocular anatomy, physiology, and examination techniques. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
Curcumin Metabolite Tetrahydrocurcumin in the Treatment of Eye Diseases
9e66d3f1-918a-4a11-9a02-3f8a39ef5f4c
7795090
Ophthalmology[mh]
Due to lifestyle changes and increased longevity, increased attention is being paid to the population with visual impairment worldwide . For this purpose, researchers are dedicated to finding specific botanical compounds, especially those from curcumin, widely used throughout history, to intervene in common mechanisms of damage in ocular pathologies . Curcumin is a major polyphenol from Curcuma longa ( Zingiberaceae) , which has a long history as a spice and folk remedy in China and India. In Ayurvedic medicine, curcumin is used to treat eye infections and other diseases. Early Europeans introduced curcumin from Asia to the Western world, and Western medicine practitioners have discovered a surprisingly wide range of beneficial properties of this ancient remedy . Considering these facts, we decided to investigate the effects of pharmaceutical products from curcumin on eye disorders and determine the ophthalmic diseases that can benefit from these products. According to statistics from the World Health Organization (WHO), the total number of individuals of all ages with visual impairment worldwide in 2010 was estimated to have been 285 million, and among these individuals, 39 million were blind. Visual impairment and blindness have become major global health issues, and among the causes of visual impairment, preventable causes account for 80% of the total global burden. The occurrence of visual impairment and blindness in developed countries is lower than that in developing regions, such as sub-Saharan Africa and South Asia . Due to the increasing age of the world’s population and changes in the age distribution worldwide, efforts to improve individuals’ quality of life with visual impairment and blindness and decrease global economic costs related to visual disorders are needed. Therefore, this study assessed the pharmaceutical pathways of curcumin metabolites to eliminate the burden of unnecessary blindness and vision impairment. 2.1. Curcumin: Limitations As a common remedy, curcumin possesses diverse properties, such as its anti-inflammatory and antioxidant capacity. Several studies demonstrated that curcumin could be a wound-healing agent when topically administered. It exerts benefits during the inflammation, proliferation, and remodeling phases in the wound healing process . However, when considering the effects of systemic absorption, access to curcumin’s pharmacological application is limited due to its poor solubility, low gastrointestinal absorption, and fast hepatic and intestinal metabolism . Therefore, modifying curcumin bioavailability is the most important step to promote its beneficial effects against several ocular diseases . In terms of improving the bioavailability of curcumin, in the next paragraphs, we will discuss the following approaches to modify curcumin: delivery formulations and metabolites. 2.2. Modulation of Curcumin: Delivery Formulations Curcumin discovery dates back approximately two centuries when curcumin was discovered from the rhizomes of Curcuma longa of the ginger family . Unfortunately, the hydrophobic polyphenol structure of curcumin significantly decreases its bioavailability. Briefly, there are three barriers to curcumin’s therapeutic potential: its low solubility, low absorption ratio, and fast metabolic rate. Previous studies have suggested the use of micelles, liposomes, phospholipid complexes, microemulsions, nanoemulsions, and several nanostructured carriers as delivery systems for curcumin . 
First, the hydrophobic curcumin loaded into the core of copolymer micelles can readily be reconstituted in water. Next, liposomes can also carry hydrophobic curcumin in their phospholipid bilayer vesicles. Finally, nanoemulsions not only have a hydrophobic liquid core but are also stabilized by a surfactant monolayer, which effectively reduces the interfacial tension of the droplets. As can be seen, size and surface properties are critical for the cellular uptake of a substance. Four broad formulation strategies, which have been used to enhance curcumin bioavailability, will be discussed below: lipid addition, absorption and dispersion on matrices, particle size reduction, and surface property modulation. 2.2.1. Lipid Addition Early approaches combined existing agents, such as piperine and turmeric oil. Piperine is a major bioactive pepper component that is rapidly absorbed through the gastrointestinal (GI) tract and does not undergo metabolic changes during its absorption from the intestine. The maximum plasma concentration of piperine is attained at approximately 6 h. In 1998, Shoba et al. showed that the presence of piperine, an inhibitor of hepatic and intestinal glucuronidation, significantly improved the curcumin plasma concentration, the extent of curcumin absorption, and the bioavailability of curcumin in both a rat model and humans . Apart from piperine, the addition of lipids to curcumin is another option. The reconstitution of curcumin with turmeric’s noncurcuminoid components had a synergistic effect and substantially increased the efficacy and bioavailability of curcumin, with the resulting patented formulation trademarked as BCM-95 ® . The relative bioavailability of BCM-95 ® was shown to be approximately 6.93- and 6.3-fold that of normal curcumin and curcumin–lecithin–piperine, respectively. Consequently, BCM-95 ® has extensive antioxidative applications in various diseases . 2.2.2. Absorption and Dispersion on Matrices Newer formulations have involved the dispersion of curcumin onto various matrices. For instance, a drug with the trade name Meriva ® uses a novel phytosome structure to enhance curcumin’s capacity to cross lipid membranes and reach the systemic circulation. Meriva ® , the microcrystalline cellulose structure combined with soy lecithin phosphatidylcholine, reaches a therapeutic level in the eye when administered at standard dosage. Therefore, the success of this approach implies its significant potential for effective ophthalmic drug therapy. 2.2.3. Particle Size Reduction The most common curcumin formulation uses various techniques to minimize its particle size. Among the resulting pharmaceutical products, nanocrystal conjugates are the most effective . For example, nanocrystal curcumin can be encapsulated in polyethylene glycol (PEG) and/or poly lactic-co-glycolic acid (PLGA), which acts as a carrier for oral delivery of curcumin; the formulation of PLGA-PEG blended nanoparticles increased curcumin bioavailability by over 55-fold . In another study, Gangwar et al. experimented with the conjugation and loading of curcumin with silica nanoparticles to improve its aqueous solubility . Compared with free curcumin, due to its nanoscale size, this formulation of curcumin had a faster dissolution rate, faster cellular uptake in vitro, and improved solubility and stability . 2.2.4. Surface Property Modulation Other critical factors concerning curcumin bioabsorption are its surface properties.
Enhancing curcumin’s surface charge and adhesion properties could improve problematic low gastrointestinal absorption by enhancing its contact with the intestinal mucosal epithelium. Since cell membranes are negatively charged, nanocrystal curcumin’s slightly positive surface charge may increase its interaction with epithelium. Research concerning the ophthalmic use of curcumin with altered physicochemical properties within the formulation mentioned above has been conducted, three examples of which follow. First, the preparation of a nanoparticle formulation of curcumin consisting of a thermosensitive ophthalmic nanogel, CUR-CNLC-GEL (a nanogel containing cationic nanostructured lipid carriers), was developed. CUR-CNLC-GEL was confirmed to enhance the corneal permeation and retention capacity of curcumin and increase bioavailability in the aqueous humor in vivo and in vitro . Furthermore, the administration of curcumin incorporated into albumin nanoparticles (Cur-BSA-NPs-Gel) to rabbits in an ophthalmic experiment showed that Cur-BSA-NPs-Gel is considered safe for ophthalmic drug delivery. The Cur-BSA-NPs-Gel formulation significantly increased the effect of curcumin in the aqueous humor . Finally, the role of curcumin-encapsulated methoxypoly(ethylene glycol)-poly(caprolactone) (MePEG-PCL) nanoparticles in the prevention of corneal neovascularization was successfully characterized. Compared with curcumin, MePEG-PCL nanoparticles more significantly suppressed vascular endothelial growth factor (VEGF), inflammatory cytokines, and matrix metalloproteinases (MMPs) to prevent angiogenic sprouting in vitro . 2.3. Modulation of Curcumin: Curcumin Metabolites The solutions, as mentioned above, enhance the bioavailability of curcumin. However, despite its low absorption, curcumin still possesses significant biological effects. Therefore, scientists recently focused on exploring curcumin metabolites, hoping to discover new methods to increase curcumin’s therapeutic potential . Curcumin is mainly metabolized into dihydrocurcumin (DHC), tetrahydrocurcumin (THC), hexahydrocurcumin (HHC), and octahydrocurcumin (OHC), the final form of hydrogenated curcumin. Among the curcumin metabolites, THC–glucuronoside, a conjugated form of curcumin, exhibited the greatest biliary concentration in rats . THC was also shown to be the primary metabolite responsible for curcumin’s various biological properties. Compared with other curcumin metabolites, THC, the major plasma metabolite of curcumin, demonstrates higher solubility at physiological pH; a longer half-life in plasma at 37 °C, and higher antioxidant, anti-inflammatory and anticancer activities. Experiments have been conducted to explore the antioxidant and anti-inflammatory effects of THC and OHC . Compared with curcumin, THC and OHC are more effective in suppressing nuclear factor-κB (NF-κB) and inhibiting the expression of cyclooxygenase 2 (COX-2). In Vitro evidence of these compounds’ antioxidative effects has been reported, but in vivo experiments are still being conducted . Since the field of curcumin metabolites is rather novel, the pathways involved in the effects of THC and OHC in the eye have rarely been published. In this context, the present review addresses the possible mechanism through which THC and OHC interfere with the development of ocular diseases . As a common remedy, curcumin possesses diverse properties, such as its anti-inflammatory and antioxidant capacity. 
Several studies demonstrated that curcumin could be a wound-healing agent when topically administered. It exerts benefits during the inflammation, proliferation, and remodeling phases in the wound healing process . However, when considering the effects of systemic absorption, access to curcumin’s pharmacological application is limited due to its poor solubility, low gastrointestinal absorption, and fast hepatic and intestinal metabolism . Therefore, modifying curcumin bioavailability is the most important step to promote its beneficial effects against several ocular diseases . In terms of improving the bioavailability of curcumin, in the next paragraphs, we will discuss the following approaches to modify curcumin: delivery formulations and metabolites. Curcumin discovery dates back approximately two centuries when curcumin was discovered from the rhizomes of Curcuma longa of the ginger family . Unfortunately, the hydrophobic polyphenol structure of curcumin significantly decreases its bioavailability. Briefly, there are three barriers to curcumin’s therapeutic potential: its low solubility, low absorption ratio, and fast metabolic rate. Previous studies have suggested the use of micelles, liposomes, phospholipid complexes, microemulsions, nanoemulsions, and several nanostructured carriers as delivery systems for curcumin . First, the hydrophobic curcumin loaded into the core of copolymer micelles could be easy to reconstitute in water. Next, liposomes can also carry hydrophobic curcumin in their phospholipid bilayer vesicles. Finally, nanoemulsions not only have a hydrophobic liquid core but are also stabilized by a surfactant monolayer, which effectively reduces the interfacial tension of the droplets. As can be seen, size and surface properties are critical for the cellular uptake of a substance. Four broad formulation strategies will be discussed below, which have been used to enhance curcumin bioavailability: lipid addition, absorption and dispersion on matrices, particle size reduction, and surface property modulation. 2.2.1. Lipid Addition Early approaches combined existing agents, such as piperine and turmeric oil. Piperine is a major bioactive pepper component that is rapidly absorbed through the gastrointestinal (GI) tract and does not undergo metabolic changes during its absorption from the intestine. The maximum plasma concentration of piperine is attained at approximately 6 h. In 1998, Shoba et al. showed that the presence of piperine, an inhibitor of hepatic and intestinal glucuronidation, significantly improved the curcumin plasma concentration, the extent of curcumin absorption, and the bioavailability of curcumin in both a rat model and humans . Apart from piperine, the addition of lipids to curcumin is another option. The reconstitution of curcumin with turmeric’s noncurcuminoid components had a synergistic effect and substantially increased the efficacy and bioavailability of cumin, with the resulting patented formulation trademarked as BCM-95 ® . The relative bioavailability of BCM-95 ® was shown to be approximately 6.93- and 6.3-fold that of normal curcumin and curcumin–lecithin–piperine, respectively. Consequently, BCM-95 ® has extensive antioxidative applications in various diseases . 2.2.2. Absorption and Dispersion on Matrices Newer formulations have involved the dispersion of curcumin onto various matrices. 
For instance, a drug with the trade name Meriva ® uses a novel phytosome structure to enhance curcumin’s capacity to cross lipid membranes and reach the systemic circulation. Meriva ® , the microcrystalline cellulose structure combined with soy lecithin phosphatidylcholine, reaches a therapeutic level in the eye when administered at standard dosage. Therefore, the success of this approach implies its significant potential for effective ophthalmic drug therapy. 2.2.3. Particle Size Reduction The most common curcumin formulation uses various techniques to minimize its particle size. Among the resulting pharmaceutical products, nanocrystal conjugates are the most effective . For example, nanocrystal curcumin can be encapsulated in polyethylene glycol (PEG) and/or poly lactic-co-glycolic acid (PLGA), which acts as a carrier for oral delivery of curcumin; the formulation of PLGA-PEG blended nanoparticles increased curcumin bioavailability by over 55-fold . In another study, Gangwar et al. experimented with the conjugation and loading of curcumin with silica nanoparticles to improve its aqueous solubility . Compared with free curcumin, due to its nanoscale size, this formulation of curcumin had a faster dissolution rate, faster cellular uptake in vitro, and improved solubility and stability . 2.2.4. Surface Property Modulation Other critical factors concerning curcumin bioabsorption are its surface properties. Enhancing curcumin’s surface charge and adhesion properties could improve problematic low gastrointestinal absorption by enhancing its contact with the intestinal mucosal epithelium. Since cell membranes are negatively charged, nanocrystal curcumin’s slightly positive surface charge may increase its interaction with epithelium. Research concerning the ophthalmic use of curcumin with altered physicochemical properties within the formulation mentioned above has been conducted, three examples of which follow. First, the preparation of a nanoparticle formulation of curcumin consisting of a thermosensitive ophthalmic nanogel, CUR-CNLC-GEL (a nanogel containing cationic nanostructured lipid carriers), was developed. CUR-CNLC-GEL was confirmed to enhance the corneal permeation and retention capacity of curcumin and increase bioavailability in the aqueous humor in vivo and in vitro . Furthermore, the administration of curcumin incorporated into albumin nanoparticles (Cur-BSA-NPs-Gel) to rabbits in an ophthalmic experiment showed that Cur-BSA-NPs-Gel is considered safe for ophthalmic drug delivery. The Cur-BSA-NPs-Gel formulation significantly increased the effect of curcumin in the aqueous humor . Finally, the role of curcumin-encapsulated methoxypoly(ethylene glycol)-poly(caprolactone) (MePEG-PCL) nanoparticles in the prevention of corneal neovascularization was successfully characterized. Compared with curcumin, MePEG-PCL nanoparticles more significantly suppressed vascular endothelial growth factor (VEGF), inflammatory cytokines, and matrix metalloproteinases (MMPs) to prevent angiogenic sprouting in vitro . Early approaches combined existing agents, such as piperine and turmeric oil. Piperine is a major bioactive pepper component that is rapidly absorbed through the gastrointestinal (GI) tract and does not undergo metabolic changes during its absorption from the intestine. The maximum plasma concentration of piperine is attained at approximately 6 h. In 1998, Shoba et al. 
showed that the presence of piperine, an inhibitor of hepatic and intestinal glucuronidation, significantly improved the curcumin plasma concentration, the extent of curcumin absorption, and the bioavailability of curcumin in both a rat model and humans . Apart from piperine, the addition of lipids to curcumin is another option. The reconstitution of curcumin with turmeric’s noncurcuminoid components had a synergistic effect and substantially increased the efficacy and bioavailability of cumin, with the resulting patented formulation trademarked as BCM-95 ® . The relative bioavailability of BCM-95 ® was shown to be approximately 6.93- and 6.3-fold that of normal curcumin and curcumin–lecithin–piperine, respectively. Consequently, BCM-95 ® has extensive antioxidative applications in various diseases . Newer formulations have involved the dispersion of curcumin onto various matrices. For instance, a drug with the trade name Meriva ® uses a novel phytosome structure to enhance curcumin’s capacity to cross lipid membranes and reach the systemic circulation. Meriva ® , the microcrystalline cellulose structure combined with soy lecithin phosphatidylcholine, reaches a therapeutic level in the eye when administered at standard dosage. Therefore, the success of this approach implies its significant potential for effective ophthalmic drug therapy. The most common curcumin formulation uses various techniques to minimize its particle size. Among the resulting pharmaceutical products, nanocrystal conjugates are the most effective . For example, nanocrystal curcumin can be encapsulated in polyethylene glycol (PEG) and/or poly lactic-co-glycolic acid (PLGA), which acts as a carrier for oral delivery of curcumin; the formulation of PLGA-PEG blended nanoparticles increased curcumin bioavailability by over 55-fold . In another study, Gangwar et al. experimented with the conjugation and loading of curcumin with silica nanoparticles to improve its aqueous solubility . Compared with free curcumin, due to its nanoscale size, this formulation of curcumin had a faster dissolution rate, faster cellular uptake in vitro, and improved solubility and stability . Other critical factors concerning curcumin bioabsorption are its surface properties. Enhancing curcumin’s surface charge and adhesion properties could improve problematic low gastrointestinal absorption by enhancing its contact with the intestinal mucosal epithelium. Since cell membranes are negatively charged, nanocrystal curcumin’s slightly positive surface charge may increase its interaction with epithelium. Research concerning the ophthalmic use of curcumin with altered physicochemical properties within the formulation mentioned above has been conducted, three examples of which follow. First, the preparation of a nanoparticle formulation of curcumin consisting of a thermosensitive ophthalmic nanogel, CUR-CNLC-GEL (a nanogel containing cationic nanostructured lipid carriers), was developed. CUR-CNLC-GEL was confirmed to enhance the corneal permeation and retention capacity of curcumin and increase bioavailability in the aqueous humor in vivo and in vitro . Furthermore, the administration of curcumin incorporated into albumin nanoparticles (Cur-BSA-NPs-Gel) to rabbits in an ophthalmic experiment showed that Cur-BSA-NPs-Gel is considered safe for ophthalmic drug delivery. The Cur-BSA-NPs-Gel formulation significantly increased the effect of curcumin in the aqueous humor . 
The strategies outlined above enhance the bioavailability of curcumin. However, despite its low absorption, curcumin still exerts significant biological effects. Researchers have therefore recently focused on curcumin metabolites in the hope of discovering new ways to increase curcumin's therapeutic potential. Curcumin is mainly metabolized into dihydrocurcumin (DHC), tetrahydrocurcumin (THC), hexahydrocurcumin (HHC), and octahydrocurcumin (OHC), the final form of hydrogenated curcumin. Among the curcumin metabolites, THC glucuronide, a conjugated form, exhibited the greatest biliary concentration in rats. THC was also shown to be the primary metabolite responsible for curcumin's various biological properties. Compared with other curcumin metabolites, THC, the major plasma metabolite of curcumin, demonstrates higher solubility at physiological pH, a longer half-life in plasma at 37 °C, and greater antioxidant, anti-inflammatory, and anticancer activities. Experiments have been conducted to explore the antioxidant and anti-inflammatory effects of THC and OHC. Compared with curcumin, THC and OHC are more effective in suppressing nuclear factor-κB (NF-κB) and inhibiting the expression of cyclooxygenase 2 (COX-2). In vitro evidence of these compounds' antioxidative effects has been reported, but in vivo experiments are still being conducted. Since the field of curcumin metabolites is rather new, the pathways underlying the effects of THC and OHC in the eye have rarely been reported. In this context, the present review addresses the possible mechanisms through which THC and OHC interfere with the development of ocular diseases.

Curcumin has been shown to have considerable potential health benefits in recent studies. A literature search yielded almost twenty thousand manuscripts on this topic from 2014 to 2019, most of which relate to the use of curcumin derivatives against cancer and cardiovascular diseases. Nevertheless, research concerning the therapeutic potential of curcumin metabolites in ophthalmology is rather limited. Since combating avoidable visual impairment and blindness is important in public health policies throughout the world, the development of THC, a curcumin metabolite, may show promise in ophthalmology. The following review discusses the effects of curcumin in eye conditions. The first section lists the main ophthalmic conditions that affect modern society and the pathological mechanisms underlying these forms of visual impairment. The second section discusses the THC pathways that may be beneficial in the treatment of visual impairment.

3.1. Main Ophthalmic Conditions in Modern Society

Aging affects all eye structures, triggering various eye conditions; consequently, the prevalence of blindness and moderate to severe visual impairment (MSVI) is much greater in elderly individuals. According to statistics reported in 2018, the global population of individuals with blindness in 2015 was approximately 36 million, and slightly more than 216 million individuals had MSVI.
At all ages, the leading cause of blindness is cataract, followed by refractive error, glaucoma, age-related macular degeneration (AMD), and corneal opacity. The causes of MSVI rank as follows: refractive error, cataract, AMD, glaucoma, and diabetic retinopathy (DR). Therefore, cataract, glaucoma, AMD, and DR account for the largest share of serious eye disease. Before examining the mechanisms of these ophthalmic conditions, some of the risk factors involved in this public health burden are outlined.

First, the most significant intrinsic risk factors for cataract are age and female sex. Age is crucial for the development of cataract because oxidative stress accumulates over time. According to National Eye Institute (NEI) statistics updated in 2019, the risk of cataract begins to increase around age 40. Poor nutrition and smoking are examples of extrinsic factors associated with cataract. Cataract is a particularly critical problem in developing countries, where the majority of cases result in blindness; as the population in those regions ages, cataract incidence has risen and drawn increasing attention.

Second, the risk factors for open-angle glaucoma are as follows. Ocular factors include increased intraocular pressure, ocular perfusion pressure, and optic disc hemorrhage. Systemic factors, such as systemic hypertension, type 2 diabetes mellitus, and lipid dysregulation, may also increase glaucoma risk, as may age, smoking, family history, and genetic factors. Although glaucoma may occur at any age, open-angle glaucoma is associated with increasing age, partly through other age-related conditions such as vascular disease, diabetes, and macular degeneration.

Third, for AMD, smoking has the strongest relationship with both wet and dry AMD aside from age. Other controllable risk factors are diet and cardiovascular health, and genetics and aging are also significantly related to AMD. National Eye Institute statistics suggest that AMD is most common among older Caucasians and that its prevalence increases markedly after age 80.

Finally, DR is strongly associated with long diabetes duration and poor glycemic and blood pressure control; apart from diabetes itself, hypertension and obesity are the factors most strongly associated with DR. The aging global population and the rising prevalence of obesity have increased the prevalence of diabetes and of diabetic retinopathy. Moreover, with improvements in diabetes treatment, more patients with diabetes live long enough for DR to develop.

In general, eye diseases are related to aging and to several systemic conditions, such as diabetes mellitus and vascular disease, and these ophthalmic conditions therefore weigh heavily on the global health burden. In the sections that follow, the causes and mechanisms of the ophthalmic conditions mentioned above are assessed in order.

3.1.1. Age-Related Cataract

A cataract is any opacification of the crystalline lens of the eye. A decline in the optical quality of the normally clear lens can lead to visual symptoms. The main pathogenesis of age-related cataract is the modification of lens proteins under oxidative stress. Severe modification of these proteins and their aberrant interactions lead to inappropriate protein folding and aggregation, causing lens opacification.
Accumulation of free radicals in the lens is a common initiating factor in cataract formation. The increase in free radicals induces oxidative stress, and lipid peroxidation (LPO) and the aggregation of malfunctioning proteins result from the oxidative damage. Babizhayev, Deyev, and Linberg injected LPO products into the vitreous after finding a correlation between the accumulation of a fluorescent end product of LPO and the degree of lens opacity. The injection of LPO products induced cataract, implying that peroxide-induced damage to lens fibers may be one of the important triggers of cataractogenesis.

As an important structural protein in the lens, α-crystallin helps fold and stabilize other lens proteins. The molecular chaperone function of α-crystallin subunits is to prevent the aggregation of proteins under stress conditions. α-Crystallin interacts with proteins that are about to precipitate; the hydrophobic sites of α-crystallin and of partially unfolded proteins associate, so that aggregation-prone proteins are held in a refolding-competent state. However, once various stress factors, especially oxidative stress, have degraded the chaperone-like function of α-crystallin, it can no longer maintain lens transparency, potentially leading to cataract formation. Many of the structural proteins, especially α-crystallin, contain an abundance of -SH groups that are highly susceptible to oxidative damage. Redox reactions between SH-containing proteins and glutathione result in the accumulation of malfunctioning proteins and a decrease in reduced glutathione (GSH), which accelerates cataract formation. In addition to oxidation, glycation of lens proteins occurs in various types of cataract; glycation enhances protein unfolding and alters the physicochemical properties and functions of proteins.

αB-crystallin (HSPB5) is a chaperone responsible for handling unfolded proteins. Excessive accumulation of unfolded proteins can cause strong ER stress and apoptosis in retinal pigment epithelium (RPE) cells; αB-crystallin therefore serves as a significant modulator of ER stress-induced cell death. Preliminary evidence has suggested that silencing of αB-crystallin via siRNA results in ER stress, subsequently leading to elevated ROS generation and reduced MnSOD activity, which damages human RPE cells. Conversely, upregulation of αB-crystallin protects RPE cells from ER stress-induced apoptosis via inhibition of C/EBP homologous protein (CHOP) and caspase 3. ER stress is transduced via at least three signaling pathways: the IRE1α-dependent, ATF6-dependent, and PERK-dependent pathways (PERK, protein kinase RNA-like ER kinase). In a study by Berthoud et al., P-PERK immunostaining was significantly higher in mice with nuclear cataracts than in wild-type mice with normal lenses. CHOP transcripts, together with ATF4 levels, were also increased in homozygous lenses, suggesting that activation of the PERK-dependent pathway is related to unfolded protein response (UPR) activation in the lens, leading to cataract.

Accordingly, the ideal way to address age-related cataract would be the application of antioxidants. However, current studies have shown that antioxidants have little effect once a cataract has formed, and the only treatment that halts the progression and development of cataract is surgical removal of the cloudy lens.
Although medical treatments for cataract have been attempted, antioxidant strategies such as the curcumin metabolites discussed later may be applied to intervene before age-related cataract formation.

3.1.2. Glaucoma

Glaucoma is a group of progressive optic neuropathies. The degeneration of retinal ganglion cells is mainly attributed to a rise in intraocular pressure caused by impaired aqueous humor outflow. Other contributing factors include chemical injury, inflammatory conditions, and changes in vessel density. Neuroprotection by curcumin metabolites may therefore be a means of preventing glaucoma progression. Unfortunately, there is little evidence for the mechanisms by which these agents prevent glaucoma progression, since the pathophysiological mechanisms of neural damage are not fully understood and clinical trials of these agents have not been conclusive. To date, glaucoma is managed by targeting intraocular pressure. A broad collaborative effort to identify methods of neuroprotection against ophthalmic diseases is ongoing, with curcumin metabolites among the primary candidates.

3.1.3. AMD

AMD and DR are two major causes of visual impairment, reflecting changes in lifestyle and increased longevity. AMD affects the macular region of the retina, causing progressive loss of vision in the center of the visual field. Early-stage changes in AMD include drusen and abnormalities of the retinal pigment epithelium (RPE), while late-stage AMD is divided into two types: neovascular (the wet form) and non-neovascular (the dry form). Several pathways, such as choroidal ischemia and oxidative damage to the RPE, have been implicated in the pathogenesis of AMD. In recent years, therapeutic targets have focused on VEGF, a key regulator of vascular growth, and on neovascular regression. Several types of retinal cells, including the RPE, astrocytes, Müller cells, vascular endothelium, and ganglion cells, possess VEGF receptors. Under normoxia, retinal cells produce moderate amounts of VEGF to support existing blood vessels; however, episodes of hypoxia, a crucial stimulus of VEGF gene expression, are related to the development of vascularization. Activation of VEGFR-2 in RPE cells increases vascular permeability through the endothelial nitric oxide synthase (eNOS) pathway, promotes proliferation through the MEK/extracellular signal-regulated kinase (ERK) pathway, and provokes migration through the mitogen-activated protein kinase (MAPK) pathway. These processes result in angiogenesis, which plays an important role in the development of AMD. Under these circumstances, curcumin metabolites may contribute to AMD remission as antioxidants and VEGF inhibitors.

3.1.4. DR

DR is a microvascular complication of diabetes mellitus. Early DR, also called nonproliferative diabetic retinopathy (NPDR), is characterized by weakened retinal vessels; microaneurysms protrude from vessel walls and leak fluid and blood into the retina. Advanced DR, known as proliferative diabetic retinopathy (PDR), results from the irregular growth of new blood vessels. Several pathogenic mechanisms are involved in DR, including VEGF, oxidative stress, ER stress, inflammation, and autophagy. Hyperglycemia leads to dysfunction of the electron transport chain, causing the accumulation of reactive oxygen species (ROS) in mitochondria. It has been shown that DR develops through chronic exposure to high glucose levels and the diacylglycerol-protein kinase C (DAG-PKC) signaling pathway.
Hyperglycemia promotes the synthesis and activity of diacylglycerol (DAG) and thereby activates the protein kinase C (PKC) pathway; hyperglycemia-induced ROS can also induce the PKC pathway. Activated PKC can lead to several vascular abnormalities, such as increased permeability and angiogenesis. Furthermore, high glucose activates the p38 MAPK signaling pathway, which initiates inflammation and subsequently induces apoptosis of endothelial cells and pericytes within retinal capillaries. Kowluru et al. reported that diabetes-induced oxidative stress can epigenetically inactivate MnSOD, an important enzyme in the removal of superoxide radicals, through elevated H4K20me3, acetyl H3K9, and p65 at the promoter of sod2 (encoding MnSOD). This leads to DR through the accumulation of reactive oxygen species and retinal capillary cell apoptosis. Other research suggested that, regardless of the type of diabetes, a high-glucose environment can induce ROS and disturb the balance of DNMT1 expression; curcumin, however, can restore the activity and expression of DNMT1, which protects the RPE from oxidative stress. Along with the ER stress and inflammation pathways, oxidative stress increases autophagy in the retina of diabetic patients. Autophagy acts as a double-edged sword in the modulation of several conditions in the body: mild autophagic stress can promote cell survival, whereas severely dysregulated autophagy can initiate and worsen DR. Additionally, oxidative stress due to hyperglycemia promotes macrophage migration and foam cell formation; macrophages and foam cells release growth factors, resulting in plaque formation in the retina. THC, the main metabolite of curcumin, may modify impaired platelet function and coagulation abnormalities, as discussed in a later section.

3.2. Therapeutic Potential of the THC Pathway

This section builds on the discussion above by focusing on curcumin's major plasma metabolite, THC, and its mechanisms in the ocular diseases described above. THC is not naturally found in turmeric extract powders but appears in plasma after curcuminoid ingestion. THC is the focus of the present review because it is a major metabolite of curcumin and exhibits activities similar to those of curcumin, whereas other identified metabolites, including the conjugates curcumin glucuronide and curcumin sulfate, are less biologically active than curcumin. Moreover, compared with curcumin, THC is more stable and has a longer degradation half-life in buffers at various pH values and in plasma (what a longer first-order half-life implies is sketched below). In conclusion, THC has several potential protective benefits for the human body and has thus been the focus of recent studies.
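To make the stability comparison above concrete, the short sketch below applies standard first-order degradation kinetics, C(t) = C0·exp(−kt) with k = ln 2 / t½. The half-life values used are hypothetical placeholders chosen for illustration, not measured values for curcumin or THC.

```python
import math

# Illustrative sketch only: first-order degradation, C(t) = C0 * exp(-k * t),
# where k = ln(2) / t_half. The half-lives below are hypothetical placeholders,
# not measured values for curcumin or THC.

def fraction_remaining(t_hours: float, t_half_hours: float) -> float:
    """Fraction of the initial concentration remaining after t_hours of decay."""
    k = math.log(2) / t_half_hours
    return math.exp(-k * t_hours)

for label, t_half in [("short half-life (0.5 h)", 0.5), ("long half-life (8 h)", 8.0)]:
    pct = fraction_remaining(2.0, t_half) * 100
    print(f"{label}: {pct:.1f}% remaining after 2 h")
# -> about 6.2% vs. 84.1% remaining after 2 h
```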
3.3. Effect of THC on Antioxidative Stress

The eye is constantly exposed to many forms of oxidative stress. ROS are products of numerous mitochondrial processes and play a vital role in ocular pathogenesis. In patients with age-related cataract, ROS generation is often stimulated by the accumulation of advanced glycation end products (AGEs) in the lens. Moreover, preliminary evidence has shown that hyperglycemia and increased ROS can result in downregulation of nicotinamide adenine dinucleotide phosphate (NADPH) and upregulation of NADPH oxidase (NOX) via phosphorylation of NOX. NADPH serves as a ROS scavenger because it supports glutathione regeneration, whereas NOX is an enzyme that primarily generates various ROS (e.g., superoxide and hydrogen peroxide); the result is a vicious cycle of elevated ROS levels.

An experiment conducted by Suryanarayana et al. suggested that, among the concentrations tested, 0.002% curcumin had the greatest antioxidant and antiglycation effects; curcumin at this concentration inhibited AGE fluorescence in the lens and thereby delayed the onset and maturation of age-related cataract. Superoxide dismutase (SOD) and glutathione peroxidase are two of the main superoxide-scavenging systems in the cell. SOD-1 knockout in mice was observed to increase the risk of macular degeneration, and mice with defective SOD-2 showed progressive retinal thinning and changes within the photoreceptor layer. In a previous study, THC treatment substantially restored the enzymatic activities of the ROS scavengers SOD and glutathione peroxidase and reversed the elevation of the oxidative stress indicator malondialdehyde (MDA). The significant increases in SOD and glutathione peroxidase strongly imply an antioxidative capacity of THC.

A decline in sirtuin-1 (SIRT1) has been related to reduced SOD levels, since SIRT1 deacetylates SOD. SIRT1 is an NAD-dependent enzyme that deacetylates various substrates, contributing to a range of cellular regulatory mechanisms, such as gene expression, metabolism, and aging. The most important function of SIRT1 is its alleviation of inflammation by inhibiting NF-κB signaling and suppressing oxidative stress. Previous studies have demonstrated dysfunction of SIRT1 in ocular diseases, and knockdown of SIRT1 has been associated with cataract, glaucoma, AMD, and DR. In contrast, ectopic upregulation of SIRT1 protected against oxidative stress-induced impairments in several eye tissues, including the RPE, cornea, and lens, and SIRT1 showed a significant neuroprotective effect in mice with an optic nerve crush injury. Li et al. examined the relationship between THC and SIRT1 in diabetic cardiomyopathy and found that THC administration ameliorates oxidative stress by activating SIRT1; THC treatment also promoted suppression of the ROS-stimulated TGF-β1/Smad3 (mothers against decapentaplegic homolog 3) fibrotic pathway. This study provided insight into a potential protective mechanism of THC, activation of SIRT1 with a consequent decrease in ROS, and into the effectiveness of THC in ameliorating fibrosis.

The above is a brief review of the antioxidative function of THC through the SIRT1 pathway. What follows is an illustration of THC-mediated antioxidative regulation via the nuclear factor erythroid 2-related factor 2 (Nrf2) pathway, which is hypothesized to be involved in ocular diseases because it regulates multiple antioxidant enzymes. Nrf2 physically interacts with Keap1, a negative regulator that limits Nrf2 activity. Under oxidative stress, modified Keap1 releases Nrf2, allowing it to bind antioxidant response elements (AREs) in the nucleus; activation of AREs then leads to the transcription of cytoprotective genes, including heme oxygenase-1 (HO-1), against oxidative stress. Nrf2 deficiency rendered RPE cells more susceptible to stress and increased damage to these cells, including increased drusen-like deposits, accumulation of lipofuscin, choroidal neovascularization, and sub-RPE deposition of inflammatory proteins. A recent study demonstrated that THC and OHC activate the hepatic Keap1-Nrf2 pathway.
THC and OHC can occupy the Nrf2-binding site of Keap1, disturbing the binding of Nrf2 to Keap1 and resulting in translocation of Nrf2 into the nucleus. THC and OHC thereby enhance the activation of Nrf2-targeted genes, including GCLC, GCLM, NQO1, and HO-1, against oxidative stress.

3.3.1. THC Has an Anti-Inflammatory Effect

Retinal ischemia is a common cause of vitreous neovascularization in retinal diseases; retinal vein occlusion and DR in particular are characterized by retinal ischemia. Vitreous neovascularization is closely associated with local inflammation in the ischemic retina. PGE2, one of the most important inflammatory mediators, is synthesized by COX-2, whose expression is in turn promoted by key cytokines, including tumor necrosis factor-alpha (TNF-α), interleukin-1 beta (IL-1β), and interleukin-6 (IL-6), during the immune response. PGE2 is a crucial factor in inflammatory diseases, fever, and pain. Therefore, drugs for various inflammatory diseases can be designed to target proinflammatory mediators such as COX-2 and PGE2. The agents most commonly used against inflammation in clinical practice are nonsteroidal anti-inflammatory drugs (NSAIDs) and COX-2 inhibitors; however, some NSAIDs also inhibit COX-1, which causes serious side effects such as gastrointestinal bleeding and ulcers. It would therefore be highly desirable to identify selective COX-2 inhibitors as safe and efficient therapeutic agents for inflammatory conditions. Zhang et al. were the first to explore the pathways by which THC and OHC treatment exert an anti-inflammatory effect. Their findings showed that THC and OHC suppressed the levels of TNF-α, IL-1β, and IL-6, demonstrating that THC and OHC can lessen inflammation by reducing the production of proinflammatory mediators. Moreover, in that study the expression of COX-2 and PGE2 in tissues was abolished by both THC and OHC, while the expression of COX-1 remained unaffected.

3.3.2. The Anti-VEGF Effect of THC

As noted in the previous section, VEGF has recently become a therapeutic target for eye diseases, especially AMD. Choroidal ischemia is one of the causes of AMD; VEGF is the primary cytokine related to angiogenesis, and ischemia induces VEGF expression through hypoxia-inducible factor-1α (HIF-1α). VEGF not only promotes cell proliferation but also increases vascular permeability through altered phosphorylation of tight junction-related proteins (e.g., zonula occludens protein 1 (ZO-1)). Additionally, VEGF triggers the MAPK signaling pathway, which drives the proliferation of endothelial cells, and VEGF-A, a member of the VEGF family, upregulates MMPs, which degrade the matrix and increase the permeability of blood vessels. Failure of the blood-retinal barrier (BRB) between RPE cells and retinal capillary endothelial (RCE) cells leads to disorders of the retina. Claesson-Welsh et al. indicated that long-term exposure to high glucose levels is associated with elevated VEGF expression and induction of vascular permeability. Hence, age-related macular degeneration, inflammation, ischemia, and upregulation of VEGF are highly correlated with retinal diseases, mainly through changes in vascular permeability, and the resulting hypoxic environment further aggravates VEGF expression. Proangiogenic VEGF/VEGFR signaling is therefore a therapeutic target. A study conducted by Yoysungnoen et al. shed light on the THC-VEGF mechanism in cervical cancer.
Significant reductions in HIF-1 α , VEGF, and VEGFR-2 protein expression, as well as decreased microvascular density, were observed in a cervical cancer-implanted nude mouse model after THC administration . In short, THC dramatically inhibited angiogenesis by downregulating HIF-1 α and the VEGF/VEGFR-2 pathway. 3.3.3. The Neuroprotective Effect of THC The optic nerves are located at the back of the eyes and are injured in people with glaucoma. Gao et al. focused on THC’s potential use as a therapeutic agent for traumatic brain injury in rats. In their study, the expression levels of microtubule-associated protein 1A/1B-light chain 3 (LC3) and Beclin-1 were increased, and those of the ubiquitin-binding protein p62 were significantly decreased after THC treatment, indicating that THC modulated activation of the autophagy pathway, which was shown to play a protective role against brain trauma in rats . Another study by Tyagi et al. illustrated the protective effects of THC associated with its antiautophagic effects . However, in this case, THC’s administration was discovered to block the conversion of LC3-I to LC3-II. Therefore, THC inhibits the autophagy pathway and serves as a neuroprotective factor. Although it often acts as a double-edged sword, autophagy is an important self-protective mechanism in cells. Excessive autophagy can damage cells, but moderate autophagy may aid in neuronal survival because increased autophagic flux boosts the clearance of unnecessary proteins and damaged mitochondria . The role of THC is to balance excess and deficient autophagy. In terms of autophagy in the eye, autophagy inhibition is a promising target for preventing retinal ganglion cell degeneration and axonal degeneration in glaucoma . In summary, the modulation of autophagy through THC administration may be an important neuroprotective intervention in glaucomatous neuropathies. 3.3.4. The Inhibitory Effect of THC on Platelet Aggregation Several coagulation factors are linked to proliferative DR. The β-thromboglobulin concentration was higher, the platelet factor 4 level was significantly increased, and fibrinogen was found to be aggregated in patients with DR compared with controls . β-thromboglobulin and platelet factor 4 are two proteins involved in platelet activation, and fibrinogen, an immediate precursor of fibrin, induces platelet aggregation through the COX pathway. A study comparing the effects of THC and curcuminoids on human platelet aggregation and blood coagulation was conducted by Chapman et al. The results showed that all curcuminoids, with the exclusion of curcumin, reduced platelet aggregation and that THC was the most potent curcuminoid. THC and other curcuminoids were found to act by inhibiting the ability of the COX enzyme to synthesize the formation of proinflammatory thromboxanes . In addition, the effect of curcuminoids in reversing aggregation is mostly due to platelet aggregation induced by arachidonic acid . However, the pathways involved in platelet aggregation are strongly related to different factors, and these experimental antiplatelet effects have not yet been confirmed in vivo due to the low bioavailability of current curcumin derivatives. Therefore, although new formulations of curcumin and THC may hold therapeutic promise, future studies are required to understand the antiplatelet effects of these compounds on the eye . 
Aging affects all the eye structures, triggering various eye conditions; consequently, the prevalence of blindness and moderate to severe visual impairment (MSVI) is much greater in elderly individuals . According to statistics reported in 2018, the global population of individuals with blindness in 2015 was approximately 36 million, and slightly more than 216 million individuals had MSVI. At all ages, the leading cause of blindness is cataracts, followed by refractive error, glaucoma, age-related macular degeneration (AMD), and corneal opacity. The causes of MSVI rank as follows: refractive error, cataracts, AMD, glaucoma, and diabetic retinopathy (DR) . Therefore, cataracts, glaucoma, AMD, and DR account for the largest percentage of serious eye disease cases. Before discussing the ophthalmic conditions’ mechanisms, some risk factors involved in this public health phenomenon are discussed. First, the most significant underlying intrinsic factors of cataracts are age and sex (female). Age is crucial for the development of cataracts due to accumulated oxidative stress over time. According to the National Eye Institute (NEI) and its statistics, updated in 2019, the increased risk of cataracts starts around age 40. Poor nutrition and smoking are examples of extrinsic factors associated with cataracts . A cataract is crucial in developing countries, where the majority of cataract cases result in blindness. Due to an aging population in those areas, the cataract incidence has increased and has gained increasing attention . Second, risk factors for open-angle glaucoma are as follows. Ocular factors include increased intraocular pressure, ocular perfusion pressure, and optic disc hemorrhage. Systemic factors, such as systemic hypertension, type 2 diabetes mellitus, and lipid dysregulation may also increase glaucoma risks. Moreover, age, smoking, family history, and genetic factors could also be risk factors . Although glaucoma may occur at any age, a relationship was found between open-angle glaucoma and increasing age due to other age-related diseases. The health deficits include vascular diseases, diabetes, and macular degeneration, which may occur with aging . Third, as for AMD, smoking has the most significant relationship with both wet and dry AMD, aside from age. Other controllable risk factors are diet and cardiovascular health. Genetics and aging are also significantly related to AMD . Statistics of the National Eye Institute suggest that AMD is most common among older white Caucasians, and the prevalence rate increases significantly over age 80. Finally, DR is strongly associated with long diabetes duration and poor glycemic and blood pressure control. Therefore, apart from diabetes, hypertension, and obesity are most significantly associated with DR . The aging global population and rising prevalence of obesity have resulted in the increased prevalence of diabetes and diabetic retinopathy. Besides, with the improvement in diabetes treatment, more patients with diabetes live long enough for DR to develop . Generally, eye diseases are related to aging and several systemic conditions, such as diabetes mellitus and vascular diseases. Therefore, these ophthalmic conditions have a high impact on the global health burden. In the section that follows, the causes and mechanisms of ophthalmic conditions mentioned above will be assessed in order. 3.1.1. Age-Related Cataract A cataract is any type of opacification of the crystalline lens in the eye. 
A decline in the lens’s optical quality, which is normally clear, can lead to visual symptoms. The main pathogenesis of age-related cataracts is the modification of lens proteins under oxidative stress. Severe modification of the proteins and their unusual interactions lead to inappropriate protein folding and aggregation, causing lens opacification. Accumulation of free radicals in the eye lens is a common initiating factor in cataract formation. The increase in free radicals induces oxidative stress, and lipid peroxidation (LPO) and the aggregation of malfunctioning proteins result from oxidative damage. Babizhayev, Deyev, and Linberg injected LPO products into the vitreous after finding a correlation between the accumulation of a fluorescent end product of LPO and the degree of lens opacity. The injection of LPO products induced cataract, implying that the lens fiber’s peroxide-induced damage may be one of the important triggers that initiate cataractogenesis . As an important structural protein in the lens, α-crystallin helps fold and stabilize other lens proteins. The molecular chaperone function of α-crystallin subunits is to prevent aggregation of proteins under stress conditions. α-Crystallin interacts with proteins that are about to precipitate . The hydrophobic sites of both α-crystallin and partially unfolded proteins integrate; therefore, the aggregation-prone proteins are held in a refolding competent state . However, after various stress factors, especially oxidative stress, deteriorate the chaperone-like function of α-crystallin, it cannot maintain lens transparency, potentially leading to cataract formation. Many of the structural proteins, especially α-crystallin, contain an abundance of -SH groups highly susceptible to oxidative damage. Redox reactions between SH-containing proteins and glutathione result in the accumulation of malfunctioning proteins and a decrease in reduced glutathione (GSH), which accelerates cataract formation . In addition to oxidation, the glycation of lens proteins appears in various types of cataract. Glycation enhances protein unfolding and alters the physicochemical properties and functions of proteins . αB-crystallin (HSPB5) is a chaperone responsible for the alleviation of unfolded proteins. However, excessive accumulation of unfolded proteins could lead to strong ER stress and apoptosis in retinal pigment epithelium (RPE) cells; therefore, αB-crystallin serves as a significant modulator of ER stress-induced cell death. Preliminary evidence has suggested that silencing of αB-crystallin via siRNA results in ER stress, subsequently leading to elevated ROS generation and reduced MnSOD activity; this causes cell damage to human RPE cells. Nevertheless, upregulation of αB-crystallin reversely prevents RPE cells from ER stress-induced apoptosis via inhibition of C/EBP homologous protein (CHOP) and caspase 3 . ER stress is transduced via at least three signaling pathways: the IRE1α-dependent pathway, ATF6-dependent pathway, and PERK-dependent pathway (PERK, protein kinase RNA-like ER kinase). In a study by Berthoud et al., P-PERK immunostaining was significantly higher in mice with nuclear cataracts than in wild-type mice with normal lenses. C/EBP homologous protein (CHOP) transcripts associated with ATF4 levels are also increased in homozygous lenses, suggesting that activation of the PERK-dependent pathway is related to unfolded protein response (UPR) activation in the lens, leading to cataract . 
Accordingly, the ideal method to correct age-related cataracts is the application of antioxidants. However, current studies have demonstrated that antioxidants have little effect after cataract formation. The only treatment to prevent the progression and development of cataracts is the surgical removal of the cloudy lens. Although medical treatments for cataracts have been administrated, models of antioxidant treatments, such as curcumin metabolites, which will be discussed later, may be applied to intervene before age-related cataract formation. 3.1.2. Glaucoma Glaucoma is a series of progressive optic neuropathies. The degeneration of retinal ganglion cells is mainly attributed to a surge in eye pressure after aqueous humor . Other factors include chemical injury, inflammatory conditions, and changes in vessel density . Therefore, neuroprotection through curcumin metabolism may be a method of preventing glaucoma . Unfortunately, there is little evidence for the mechanisms by which these agents prevent glaucoma progression since the pathophysiological mechanisms of neural damage are not fully understood, and the clinical trials of these agents have not been conclusive. To date, the management of glaucoma is achieved by targeting intraocular pressure . A broad collaborative effort to identify methods for neuroprotection against ophthalmic diseases is ongoing, with curcumin metabolites serving as the primary candidates. 3.1.3. AMD AMD and DR are two major causes of visual impairment due to changes in lifestyle and increased longevity . AMD affects the retina’s macular region, causing progressive loss of vision in the center of the visual field. Changes in early-stage AMD include drusen and abnormalities of the retinal pigment epithelium (RPE), while late-stage AMD is divided into two types: neovascular (also known as the wet form of AMD) and non-neovascular (the dry form of AMD). Several pathways, such as choroidal ischemia and oxidative damage in RPE, have been implicated in AMD’s pathogenesis. In recent years, therapeutic targets have focused on VEGF, a key regulator of vascular growth, and neovascular regression . Several types of retinal cells, including the RPE, astrocytes, Müller cells, vascular endothelium, and ganglion cells, possess VEGF receptors . In tissue under normoxia, retinal cells produce moderate VEGF to support existing blood vessels. However, hypoxia episodes, a crucial stimulus of VEGF gene expression, are related to vascularization development . The activation of VEGFR-2 in cells in the RPE increases vascular permeability through the endothelial nitric oxide synthase (eNOS) pathway, promotes proliferation through the MEK/extracellular signal-regulated kinase (ERK) pathway, and provokes migration through the mitogen-activated protein kinase (MAPK) pathway. These processes result in angiogenesis, which plays an important role in the formation of AMD . Under these circumstances, curcumin metabolites may play a role in AMD remission as antioxidants and VEGF inhibitors. 3.1.4. DR DR is a microvascular complication of diabetes mellitus. Early DR, also called nonproliferative diabetes retinopathy (NPDR), is characterized by weakened retina vessels. Microaneurysms protrude from vessel walls and leak fluid and blood into the retina. Advanced DR, known as proliferative diabetic retinopathy (PDR), is a complication resulting from new blood vessels’ irregular growth. 
Several pathogenic mechanisms are involved in DR, including VEGF, oxidative stress, ER stress, inflammation, and autophagy. Hyperglycemia leads to the dysfunction of the electron transport chain, causing the accumulation of reactive oxygen species (ROS) in mitochondria. It has been shown that DR occurs through chronic exposure to a high glucose level and the diacylglycerol-protein kinase C (DAG-PKC) molecular signaling pathway. Hyperglycemia promotes the synthesis and activity of DAG (diacylglycerol) and then triggers the PKC’s activation (protein kinase C) pathway. However, hyperglycemia-induced ROS can also induce the PKC pathway. Activated PKC can lead to several vascular abnormalities, such as increases in permeability and angiogenesis . Furthermore, high glucose also activates the p38 MAPK signaling pathway, which initiates inflammation and subsequently induces apoptosis of endothelial cells and pericytes within retinal capillaries . Kowluru et al. reported that diabetic-induced oxidative stresses could epigenetically result in the inactivation of MnSOD, an important enzyme in the removal of superoxide radicals, through elevated H4K20me3, acetyl H3K9, and p65 at the promoter of sod2 (encoding MnSOD) . This leads to DR due to the accumulation of reactive oxidative species and retinal capillary cell apoptosis . Other research suggested that regardless of the type of diabetes, a high glucose condition can induce ROS and impair the balance of DNMT1 expression; however, curcumin can restore the activity and expression of DNMT1, which protects RPE from oxidative stress . Along with ER stress and inflammation pathways, oxidative stress increases autophagy in the retina of diabetic patients. Autophagy acts as a double-edged sword in the modulation of several conditions in the body. Mild autophagic stress can lead to cell survival; however, severe dysregulated autophagy can initiate and deteriorate DR . Additionally, oxidative stress due to hyperglycemia promotes macrophage migration and foam cell formation. Macrophages and foam cells release growth factors, resulting in plaque formation in the retina. THC, the main metabolite of curcumin, may modify impaired platelet function and coagulation abnormalities, which will be discussed in a later paragraph . A cataract is any type of opacification of the crystalline lens in the eye. A decline in the lens’s optical quality, which is normally clear, can lead to visual symptoms. The main pathogenesis of age-related cataracts is the modification of lens proteins under oxidative stress. Severe modification of the proteins and their unusual interactions lead to inappropriate protein folding and aggregation, causing lens opacification. Accumulation of free radicals in the eye lens is a common initiating factor in cataract formation. The increase in free radicals induces oxidative stress, and lipid peroxidation (LPO) and the aggregation of malfunctioning proteins result from oxidative damage. Babizhayev, Deyev, and Linberg injected LPO products into the vitreous after finding a correlation between the accumulation of a fluorescent end product of LPO and the degree of lens opacity. The injection of LPO products induced cataract, implying that the lens fiber’s peroxide-induced damage may be one of the important triggers that initiate cataractogenesis . As an important structural protein in the lens, α-crystallin helps fold and stabilize other lens proteins. 
The molecular chaperone function of α-crystallin subunits is to prevent aggregation of proteins under stress conditions. α-Crystallin interacts with proteins that are about to precipitate . The hydrophobic sites of both α-crystallin and partially unfolded proteins integrate; therefore, the aggregation-prone proteins are held in a refolding competent state . However, after various stress factors, especially oxidative stress, deteriorate the chaperone-like function of α-crystallin, it cannot maintain lens transparency, potentially leading to cataract formation. Many of the structural proteins, especially α-crystallin, contain an abundance of -SH groups highly susceptible to oxidative damage. Redox reactions between SH-containing proteins and glutathione result in the accumulation of malfunctioning proteins and a decrease in reduced glutathione (GSH), which accelerates cataract formation . In addition to oxidation, the glycation of lens proteins appears in various types of cataract. Glycation enhances protein unfolding and alters the physicochemical properties and functions of proteins . αB-crystallin (HSPB5) is a chaperone responsible for the alleviation of unfolded proteins. However, excessive accumulation of unfolded proteins could lead to strong ER stress and apoptosis in retinal pigment epithelium (RPE) cells; therefore, αB-crystallin serves as a significant modulator of ER stress-induced cell death. Preliminary evidence has suggested that silencing of αB-crystallin via siRNA results in ER stress, subsequently leading to elevated ROS generation and reduced MnSOD activity; this causes cell damage to human RPE cells. Nevertheless, upregulation of αB-crystallin reversely prevents RPE cells from ER stress-induced apoptosis via inhibition of C/EBP homologous protein (CHOP) and caspase 3 . ER stress is transduced via at least three signaling pathways: the IRE1α-dependent pathway, ATF6-dependent pathway, and PERK-dependent pathway (PERK, protein kinase RNA-like ER kinase). In a study by Berthoud et al., P-PERK immunostaining was significantly higher in mice with nuclear cataracts than in wild-type mice with normal lenses. C/EBP homologous protein (CHOP) transcripts associated with ATF4 levels are also increased in homozygous lenses, suggesting that activation of the PERK-dependent pathway is related to unfolded protein response (UPR) activation in the lens, leading to cataract . Accordingly, the ideal method to correct age-related cataracts is the application of antioxidants. However, current studies have demonstrated that antioxidants have little effect after cataract formation. The only treatment to prevent the progression and development of cataracts is the surgical removal of the cloudy lens. Although medical treatments for cataracts have been administrated, models of antioxidant treatments, such as curcumin metabolites, which will be discussed later, may be applied to intervene before age-related cataract formation. Glaucoma is a series of progressive optic neuropathies. The degeneration of retinal ganglion cells is mainly attributed to a surge in eye pressure after aqueous humor . Other factors include chemical injury, inflammatory conditions, and changes in vessel density . Therefore, neuroprotection through curcumin metabolism may be a method of preventing glaucoma . 
Unfortunately, there is little evidence for the mechanisms by which these agents prevent glaucoma progression since the pathophysiological mechanisms of neural damage are not fully understood, and the clinical trials of these agents have not been conclusive. To date, the management of glaucoma is achieved by targeting intraocular pressure . A broad collaborative effort to identify methods for neuroprotection against ophthalmic diseases is ongoing, with curcumin metabolites serving as the primary candidates. AMD and DR are two major causes of visual impairment due to changes in lifestyle and increased longevity . AMD affects the retina’s macular region, causing progressive loss of vision in the center of the visual field. Changes in early-stage AMD include drusen and abnormalities of the retinal pigment epithelium (RPE), while late-stage AMD is divided into two types: neovascular (also known as the wet form of AMD) and non-neovascular (the dry form of AMD). Several pathways, such as choroidal ischemia and oxidative damage in RPE, have been implicated in AMD’s pathogenesis. In recent years, therapeutic targets have focused on VEGF, a key regulator of vascular growth, and neovascular regression . Several types of retinal cells, including the RPE, astrocytes, Müller cells, vascular endothelium, and ganglion cells, possess VEGF receptors . In tissue under normoxia, retinal cells produce moderate VEGF to support existing blood vessels. However, hypoxia episodes, a crucial stimulus of VEGF gene expression, are related to vascularization development . The activation of VEGFR-2 in cells in the RPE increases vascular permeability through the endothelial nitric oxide synthase (eNOS) pathway, promotes proliferation through the MEK/extracellular signal-regulated kinase (ERK) pathway, and provokes migration through the mitogen-activated protein kinase (MAPK) pathway. These processes result in angiogenesis, which plays an important role in the formation of AMD . Under these circumstances, curcumin metabolites may play a role in AMD remission as antioxidants and VEGF inhibitors. DR is a microvascular complication of diabetes mellitus. Early DR, also called nonproliferative diabetes retinopathy (NPDR), is characterized by weakened retina vessels. Microaneurysms protrude from vessel walls and leak fluid and blood into the retina. Advanced DR, known as proliferative diabetic retinopathy (PDR), is a complication resulting from new blood vessels’ irregular growth. Several pathogenic mechanisms are involved in DR, including VEGF, oxidative stress, ER stress, inflammation, and autophagy. Hyperglycemia leads to the dysfunction of the electron transport chain, causing the accumulation of reactive oxygen species (ROS) in mitochondria. It has been shown that DR occurs through chronic exposure to a high glucose level and the diacylglycerol-protein kinase C (DAG-PKC) molecular signaling pathway. Hyperglycemia promotes the synthesis and activity of DAG (diacylglycerol) and then triggers the PKC’s activation (protein kinase C) pathway. However, hyperglycemia-induced ROS can also induce the PKC pathway. Activated PKC can lead to several vascular abnormalities, such as increases in permeability and angiogenesis . Furthermore, high glucose also activates the p38 MAPK signaling pathway, which initiates inflammation and subsequently induces apoptosis of endothelial cells and pericytes within retinal capillaries . Kowluru et al. 
reported that diabetic-induced oxidative stresses could epigenetically result in the inactivation of MnSOD, an important enzyme in the removal of superoxide radicals, through elevated H4K20me3, acetyl H3K9, and p65 at the promoter of sod2 (encoding MnSOD) . This leads to DR due to the accumulation of reactive oxidative species and retinal capillary cell apoptosis . Other research suggested that regardless of the type of diabetes, a high glucose condition can induce ROS and impair the balance of DNMT1 expression; however, curcumin can restore the activity and expression of DNMT1, which protects RPE from oxidative stress . Along with ER stress and inflammation pathways, oxidative stress increases autophagy in the retina of diabetic patients. Autophagy acts as a double-edged sword in the modulation of several conditions in the body. Mild autophagic stress can lead to cell survival; however, severe dysregulated autophagy can initiate and deteriorate DR . Additionally, oxidative stress due to hyperglycemia promotes macrophage migration and foam cell formation. Macrophages and foam cells release growth factors, resulting in plaque formation in the retina. THC, the main metabolite of curcumin, may modify impaired platelet function and coagulation abnormalities, which will be discussed in a later paragraph . This paragraph follows from the previous chapter, which outlined curcumin and its major plasma metabolite, THC, and its mechanisms in the ocular diseases mentioned above. THC is not naturally found in turmeric extract powders but is found in plasma after curcuminoid ingestion. THC is the focus of the present study because it is a major metabolite of curcumin and exhibited activities similar to those of curcumin. In contrast, other identified metabolites, including the conjugates curcumin glucuronide and curcumin sulfate, are less biologically active than curcumin . Moreover, compared with curcumin, THC is more stable and has a longer degradation half-life in buffers at various pH values and plasma . In conclusion, THC has several potential protective benefits for the human body and has thus been the focus of recent studies . The eye is constantly exposed to all types of oxidative stress. ROS are the product of many mitochondria mechanisms and play a vital role in eye pathogenesis. ROS generation is often stimulated by the accumulation of advanced glycation end products (AGEs) in the lens in age-related cataract patients. Moreover, preliminary evidence demonstrated that hyperglycemia and increased ROS could result in downregulation of nicotinamide adenine dinucleotide phosphate (NADPH) and upregulation of NADPH oxidase (NOX) via phosphorylation of NOX. NADPH serves as a ROS scavenger because of glutathione regeneration; NOX is an enzyme that primarily generates different ROS (e.g., superoxide and hydrogen peroxide), which causes a vicious cycle of elevated ROS levels . An experiment conducted by Suryanarayana et al. suggested that among the concentrations tested, 0.002% curcumin had the greatest antioxidant and antiglycation effects. Curcumin at a 0.002% concentration inhibited AGE fluorescence in the lens. This delayed the onset and maturation of age-related cataract . Superoxide dismutase (SOD) and glutathione peroxidase are two of the main superoxide-scavenging systems in the cell. SOD-1 knockout in mice was observed to increase the risk of macular degeneration development . Mice with defective SOD-2 also showed progressive retinal thinning and changes within the photoreceptor layer . 
The enzymatic activities of SOD and glutathione peroxidase, ROS scavengers, and levels of the oxidative stress indicator malondialdehyde (MSA) were substantially reversed after THC treatment in a previous examination . Significant increases in SOD and glutathione peroxidase strongly imply the antioxidative capacity of THC. A decline in sirtuin-1 (SIRT1) was related to SOD’s reduced levels of SOD since SIRT1 deacetylates SOD . SIRT1 is an NAD-dependent enzyme that deacetylates various substrates, contributing to a range of cellular regulatory mechanisms, such as gene expression, metabolism, and aging. The most important function of SIRT1 is its alleviation of inflammation by inhibiting NF-κB signaling and suppressing oxidative stress. Previous studies have demonstrated the dysfunction of SIRT1 in ocular diseases, and the knockdown of SIRT1 was associated with cataract, glaucoma, AMD, and DR . In contrast, ectopic upregulation of SIRT1 served to protect against oxidative stress-induced impairments in several eye tissues, including the RPE, cornea, and lens. SIRT1 played a role in oxidative stress and was shown to have a significant neuroprotective effect in mice with an optic nerve crush injury . Li et al. examined the correlation between THC and SIRT1 in diabetic cardiomyopathy and discovered that THC administration ameliorates oxidative stress by activating SIRT1. Suppression of the ROS-stimulated TGB-β-1 pathway against the decapentaplegic homolog 3 (Smad3) fibrotic pathway was also promoted by THC treatment . This study provided insight into THC’s potential protective mechanism by activating SIRT1 and thus decreasing ROS, and the effectiveness of THC in ameliorating fibrosis. The above is a brief review of the antioxidative function of THC through the SIRT1 pathway. What follows is an illustration of THC-mediated antioxidative regulation via the nuclear factor erythroid 2-related factor 2 (Nrf2) pathway, hypothesized to be involved in ocular diseases due to its regulation of multiple antioxidant enzymes . Nrf2 physically interacts with Keap1, a negative regulator that limits Nrf2 activity. Under oxidative stress, modified Keap1 releases Nrf2, causing it to bind antioxidant response elements (AREs) in the nucleus. After that, the activation of AREs leads to the transcription of cytoprotective genes, including heme oxygenase-1 (HO-1), against oxidative stress . Nrf2 deficiency rendered cells of the RPE more susceptible to stress and increased damage to these cells. This stress in RPE cells involved increased drusen-like deposits, the accumulation of lipofuscin, choroidal neovascularization, and the sub-RPE deposition of inflammatory proteins . A recent study demonstrated that THC and OHC activate the liver’s Keap1-Nrf2 pathway. THC and OHC can occupy the Nrf2-binding site of Keap1, disturbing the binding of Nrf2 and Keap1 and resulting in Nrf2 translocation into the nucleus. Therefore, THC and OHC enhance the activation of Nrf2-targeted genes, including GCLC, GCLM, NQO1, and HO-1, against oxidative stress . 3.3.1. THC has an Anti-Inflammatory Effect Retinal ischemia is a common cause of vitreous neovascularization in retinal diseases, among which retinal vein occlusion and DR are characterized by retinal ischemia. Vitreous neovascularization is closely associated with local inflammation in the ischemic retina . PGE 2 , one of the most important inflammatory mediators, is synthesized by COX-2. 
COX-2, however, is usually promoted by essential cytokines, including tumor necrosis factor-alpha (TNF-α), interleukin-1 beta (IL-1β), and interleukin-6 (IL-6), in the immune response to pathogenesis. PGE 2 is a crucial factor in inflammatory diseases, fever, and pain . Therefore, drugs to address various inflammatory diseases can be designed to target the proinflammatory cytokines COX-2 and PGE 2 . The most commonly adopted agents for inflammation in clinical practice are nonsteroidal anti-inflammatory drugs (NSAIDs) and COX-2 inhibitors; nevertheless, some NSAIDs can inhibit COX-1, which causes serious side effects such as gastrointestinal bleeding and ulcers . Hence, it would be extremely desirable to explore selective COX-2 inhibitors as safe and efficient therapeutic agents for inflammatory conditions. Zhang et al. were the first to explore the pathways by which THC and OHC treatment exert an anti-inflammatory effect . Their findings showed that THC and OHC suppressed the levels of TNF-α, IL-1β, and IL-6, demonstrating that THC and OHC could lessen inflammation by reducing the production of proinflammatory mediators. Moreover, the expression of COX-2 and PGE 2 in tissues was eliminated by both THC and OHC in the study, while the expression of COX-1 remained unaffected. 3.3.2. The Anti-VEGF Effect of THC As noted in the previous section, VEGF has recently become a therapeutic target for eye diseases, especially AMD. Choroidal ischemia is one of the causes of AMD. VEGF is the primary cytokine related to angiogenesis, and ischemia induces VEGF expression through hypoxia-inducible factor-1α (HIF-1α) . VEGF not only promotes cell proliferation but also increases vascular permeability through alterations in the phosphorylation of tight junction-related proteins (e.g., zonula occludens protein 1 (ZO1)) . Additionally, VEGF triggers the MAPK signaling pathway, which is responsible for the proliferation of endothelial cells. Furthermore, VEGF-A, a member of VEGF, leads to upregulation of MMPs, which causes degradation of the matrix and increases the permeability of blood vessels . Failure of the blood-retinal barrier (BRB) between RPE cells and retinal capillary endothelial (RCE) cells leads to disorders in the retina. Claesson-Welsh et al. indicated that long-term exposure to high glucose levels is associated with elevated expression of VEGF and induction of vascular permeability. Hence, age-related macular degeneration, inflammation, ischemia, and upregulation of VEGF are highly correlated with retinal diseases, mainly due to vascular permeability changes . The resulting hypoxic environment aggravates VEGF. Therefore, proangiogenic stimulation of VEGF and VEGFR is a therapeutic target. A study conducted by Yoysungnoen et al. shed light on THC and VEGF mechanisms in cervical cancer. Significant reductions in HIF-1 α , VEGF, and VEGFR-2 protein expression, as well as decreased microvascular density, were observed in a cervical cancer-implanted nude mouse model after THC administration . In short, THC dramatically inhibited angiogenesis by downregulating HIF-1 α and the VEGF/VEGFR-2 pathway. 3.3.3. The Neuroprotective Effect of THC The optic nerves are located at the back of the eyes and are injured in people with glaucoma. Gao et al. focused on THC’s potential use as a therapeutic agent for traumatic brain injury in rats. 
In their study, the expression levels of microtubule-associated protein 1A/1B-light chain 3 (LC3) and Beclin-1 were increased, and those of the ubiquitin-binding protein p62 were significantly decreased after THC treatment, indicating that THC modulated activation of the autophagy pathway, which was shown to play a protective role against brain trauma in rats . Another study by Tyagi et al. illustrated the protective effects of THC associated with its antiautophagic effects . However, in this case, THC’s administration was discovered to block the conversion of LC3-I to LC3-II. Therefore, THC inhibits the autophagy pathway and serves as a neuroprotective factor. Although it often acts as a double-edged sword, autophagy is an important self-protective mechanism in cells. Excessive autophagy can damage cells, but moderate autophagy may aid in neuronal survival because increased autophagic flux boosts the clearance of unnecessary proteins and damaged mitochondria . The role of THC is to balance excess and deficient autophagy. In terms of autophagy in the eye, autophagy inhibition is a promising target for preventing retinal ganglion cell degeneration and axonal degeneration in glaucoma . In summary, the modulation of autophagy through THC administration may be an important neuroprotective intervention in glaucomatous neuropathies. 3.3.4. The Inhibitory Effect of THC on Platelet Aggregation Several coagulation factors are linked to proliferative DR. The β-thromboglobulin concentration was higher, the platelet factor 4 level was significantly increased, and fibrinogen was found to be aggregated in patients with DR compared with controls . β-thromboglobulin and platelet factor 4 are two proteins involved in platelet activation, and fibrinogen, an immediate precursor of fibrin, induces platelet aggregation through the COX pathway. A study comparing the effects of THC and curcuminoids on human platelet aggregation and blood coagulation was conducted by Chapman et al. The results showed that all curcuminoids, with the exception of curcumin, reduced platelet aggregation and that THC was the most potent curcuminoid. THC and other curcuminoids were found to act by inhibiting COX-mediated synthesis of proinflammatory thromboxanes . In addition, the effect of curcuminoids in reversing aggregation is mostly seen for platelet aggregation induced by arachidonic acid . However, the pathways involved in platelet aggregation are strongly related to different factors, and these experimental antiplatelet effects have not yet been confirmed in vivo due to the low bioavailability of current curcumin derivatives. Therefore, although new formulations of curcumin and THC may hold therapeutic promise, future studies are required to understand the antiplatelet effects of these compounds on the eye .
Curcumin has been used throughout history for the prevention of various conditions. The potential benefits of curcumin in several major ocular diseases, such as age-related cataract, glaucoma, AMD, and DR, are under investigation. However, the low bioavailability of curcumin limits its effective concentration. Therefore, two main approaches to overcome this issue were discussed above: the formulation of curcumin for its delivery and the use of curcumin metabolites. The former has long been intensively studied and can roughly be summarized as three strategies: lipid addition, absorption and dispersion on matrices; particle size reduction; and surface property modulation. Pharmaceutical applications of curcumin usually combine several of these methods to achieve the desired effect. The latter is a relatively new field to researchers and pharmaceutical companies, and among curcumin metabolites and conjugates, THC has been the focus. Ocular diseases lead to visual impairments through oxidative stress, ER stress, inflammation, and autophagy. However, THC was found to possess antioxidative, anti-inflammatory, anti-VEGF, and neuroprotective properties in vivo and in vitro.
Combined with the above arguments, this review suggests a potential protective effect of THC against ocular impairment. However, few direct experiments with THC have been conducted in the eye. First, since vessels in the eye belong to the peripheral vascular system, achieving a therapeutic level of curcumin in the eye is difficult. Once the limitation of curcumin bioavailability has been overcome, ensuring the stability of curcumin in other organs, such as the liver, spleen, neural system, and cardiovascular system, is an additional issue. Fortunately, THC was reported to have better bioavailability and stability than curcumin, explaining the attention THC has received in recent years. In addition, botanical compounds are generally used as prophylactic treatments instead of remedies. In general, further investigation is required for curcumin and its related compounds to be applied as noninvasive and preventative complementary compounds against eye diseases.
Serum metabolome indicators of early childhood development in the Brazilian National Survey on Child Nutrition (ENANI-2019)
5bce67eb-8d64-43eb-8655-a31fb996d72c
11805503
Biochemistry[mh]
The early years of life are characterized by remarkable growth and neurodevelopment . Child development encompasses many dimensions of a child’s well-being. It is generally described into specific streams or domains of development, including motor development, speech and language progression, cognitive abilities, and socio-emotional skills . Neurogenesis starts in the intrauterine environment and continues to shape brain morphology and plasticity after birth . The interval from birth to eight years represents a unique and critical period in which the development of a child’s brain can be significantly shaped. This phenomenon encompasses special sensitivity to experiences that promote cognitive, social, emotional, and physical development . The acquisition of developmental skills results from an interplay between the development of the nervous system and other organ systems . Optimal brain development requires a stimulating environment, adequate nutrients, and social interaction with attentive caregivers . The early childhood development (ECD) impacts long-term individual and population health outcomes, including the ability to learn, achievements in school and later life, citizenship, involvement in community activities, and overall quality of life . An estimated 250 million children under five in low- and middle-income countries are at risk of not attaining their developmental potential, leading to an average deficit of 19.8% in adult annual income . In 2015, the importance of ECD was recognized and incorporated into the ‘United Nations Sustainable Development Goals’. Studies have demonstrated that early child metabolome disturbances may be implicated in the pathogenesis of non-typical neurodevelopment, including autism spectrum disorder (ASD; ; ), communication skills development , and risk of impaired neurocognitive development . Children diagnosed with neurodevelopmental delays tend to experience more favorable treatment outcomes when these conditions are identified and addressed earlier . Therefore, biomarkers are urgently needed to predict an infant’s potential risk for developmental issues while gaining new insights into underlying disease mechanisms. Although child development has been a focus of research for decades, studies in low- and middle-income countries on the potential role of circulating metabolites in ECD remain limited. The present study aims to identify associations between children’s serum metabolome and ECD. Identifying the relationships between metabolic phenotypes and ECD outcomes can elucidate pathways and targets for potential interventions, such as serum metabolites associated with food consumption in infancy . Study design and participants This cross-sectional study uses data from the Brazilian National Survey on Child Nutrition (ENANI-2019). ENANI-2019 is a population-based household survey with national coverage and representativeness of children aged <5 years that has investigated dietary intake, anthropometric status, and micronutrient deficiency. Details of the ENANI-2019 sample design, study completion, and methodology have been published previously . ENANI-2019 data collection took place from February 2019 and ended in March 2020 due to the COVID-19 pandemic. Covariates Trained interviewers administered a structured questionnaire to collect socio-demographic, health and anthropometric data . 
The variables included in this study were: the child’s age (in months), sex (male or female), educational level of the mother/caregiver of the child (0–7, 8–10, and ≥11 completed years of education), mode of delivery (vaginal or c-section), monthly family income (<62.2, 62.2–124.4, 124.5–248.7, >248.7 USD). Body weight (kg) and length or height (m) were used to calculate the weight for length/height z-scores (w/h z-scores). Also, the w/h z-scores were classified based on the age and sex of the child, according to World Health Organization (WHO) standards . The child’s diet quality was assessed using the minimum dietary diversity (MDD) indicator proposed by the WHO (WHO & United Nations Children’s Fund (UNICEF), 2021). MDD requires the consumption of foods from at least five of eight food groups during the previous day. The eight food groups are (1) breast milk; (2) grains, roots, tubers, and plantains; (3) pulses (beans, peas, lentils), nuts, and seeds; (4) dairy products (milk, infant formula, yogurt, cheese); (5) flesh foods (meat, fish, poultry, organ meats); (6) eggs; (7) vitamin-A rich fruits and vegetables; and (8) other fruits and vegetables. The variable was dichotomized as children who had consumed ≥5 or <5 food groups. Data to produce this indicator were derived from the ENANI structured questionnaire on foods consumed the day before the first interview. Furthermore, in ENANI-2019, caregivers completed one 24 hr food recall (R24h) reporting all of the child’s food and beverage intake on the day before the interview. Child fiber intake (grams) was obtained from the R24h. Assessment of ECD The Survey of Well-being of Young Children (SWYC) milestones questionnaire was used to assess ECD. This questionnaire inquires about motor, language, and cognitive milestones appropriate for the age range of the form . It is recognized by the American Academy of Pediatrics and is a widely disseminated screening tool for identifying developmental delays in children aged 1–65 months . The SWYC milestones questionnaire was developed and validated by , and a version of the SWYC (SWYC-BR) has been translated, cross-culturally adapted, and validated for use in Brazilian children . A recently published study evaluated the internal consistency of the SWYC-BR milestones questionnaire using the ENANI-2019 data and Cronbach’s alpha, which showed adequate performance (0.965; 95% CI: 0.963–0.968; ). SWYC-BR comprises 12 distinct forms, each aligned with the recommended age for routine pediatric wellness visits from 1 to 65 months (specifically at 1–3, 4–5, 6–8, 9–11, 12–14, 15–17, 18–22, 23–28, 29–34, 35–46, 47–58, and 59–65 months). Each form is a 10-item questionnaire. For each item, a parent/caregiver can choose one of three answers that best describe their child (‘not yet’, ‘somewhat’, or ‘very much’). The ENANI-2019 data collection system automatically selected the appropriate set of developmental milestones according to the child’s age. The corrected age was used to select the proper set of developmental milestones for children under two years who were born preterm (<37 gestational weeks; ). Developmental quotient The Developmental quotient (DQ) is a continuous variable calculated by dividing developmental age by chronological age. The item response theory and graded response models were used to estimate development age according to the child’s developmental milestones already reached .
The analysis used the full information method and incorporated the complex sample design in the Mplus software version 7 (Los Angeles, USA; ). The estimated model allowed the construction of an item characteristic curve (ICC) for each milestone, representing the change in the probability of a given response (sometimes or always) and the discrimination of each milestone development by age, estimating the development age . The ICC and its coefficients were used to estimate developmental age according to the developmental milestones reached by each child. This methodology has been previously used to assess ECD with the SWYC and the Denver Test . Unlike the use of raw test scores, these methods avoid the influence of the item set on the results. This approach enables the assessment of each item rather than just the final score, as the item set might be biased—meaning there could be an imbalance in the number of activities more commonly achieved among the specified items. Consequently, reaching the maximum score on the scale may be easier for certain age groups. The DQ was calculated by dividing the developmental age by the chronological age . A DQ equal to 1 indicates that the expected age milestones are attained. DQ values <1 or >1 suggest attaining age milestones below or above expectations. This method allows analyzing the outcome as a continuous variable. Blood collection Details of the procedures adopted for blood collection and laboratory analyses have been previously described . Fasting was not required, and changes in medication were not necessary to draw the blood sample. Briefly, 8 mL of blood sample were drawn and distributed in a trace tube (6 mL) and EDTA tube (2 mL) and transported in a cooler with a controlled temperature (from 2 °C to 8 °C) to a partner laboratory. Aliquots from the trace tube were centrifuged and the serum was transferred to a second trace tube and stored at freezing temperature (–20 °C) until laboratory analyses were performed. Serum samples with sufficient volume were stored in a biorepository (–80 °C) prior to untargeted metabolome analysis. Serum processing and metabolome analysis Untargeted metabolomic analysis was performed on serum filtrate samples using a high-throughput platform based on multisegment injection-capillary electrophoresis-mass spectrometry (MSI-CE-MS). Samples were first thawed slowly on ice, where 50 µL were aliquoted and then diluted four-fold to a final volume of 200 µL in deionized water with an internal standard mix containing 40 µmol/L 3-chlorotyrosine, 3-fluorophenylalanine, 2-fluorotyrosine, trimethylamine-N-oxide[D9], γ-amino butyrate[D6], choline[D9], creatinine[D3], ornithine [ 15 N2], histidine[ 15 Nalpha], carnitine[D3], 3-methylhistidine[D3] and 2 mmol/L glucose[ 13 C6]. Diluted serum samples were then transferred to pre-rinsed Nanosep ultracentrifuge devices with a molecular weight cutoff of 3 kDa (Cytiva Life Sciences, Marlborough, USA), and centrifuged at 10,000 × g for 15 min to remove proteins. Following ultrafiltration, 20 µL of diluted serum filtrate samples were transferred to CE-compatible polypropylene vials and analyzed using MSI-CE-MS. A pooled QC was also prepared to evaluate technical precision throughout the study using 50 µL aliquots collected from the first batch of 979 serum samples processed. Overall, serum specimens were prepared and run as three separate batches of 979, 1990, and 2035 samples over an eighteen-month period.
A QC-based batch correction algorithm was applied to reduce long-term system drift and improve reproducibility with QC samples analyzed in a randomized position within each analytical run . High-throughput MSI-CE-MS metabolomic analyses was performed using an Agilent 6230B time-of-flight mass spectrometer (Agilent, Santa Clara, USA) with an electrospray ion source coupled to an Agilent G7100A capillary electrophoresis (CE) instrument (Agilent, Santa Clara, USA). The serum metabolome coverage comprises primarily cationic/zwitterionic and anionic polar metabolites (filtrate/unbound to protein) when using full-scan data acquisition under positive and negative ionization modes. Given the isocratic separation conditions with steady-state ionization via a sheath liquid interface, MSI-CE-MS increases sample throughput using a serial injection format where 12 samples and a pooled QC are analyzed within a single analytical run. Instrumental and data preprocessing parameters have been previously described . The technical precision for serum metabolites measured in pooled QC samples had a median coefficient of variation (CV) of 10.5% with a range from 2.7 to 31% (n=422), which were analyzed by MSI-CE-MS in every analytical run throughout the study following batch correction. Overall, seventy-five circulating polar metabolites were measured in most samples (frequency >50%) with adequate technical precision (CV <30%) with the exception of symmetric dimethylarginine that was removed. Most metabolites were identified by spiking (i.e. co-migration with low mass error <5 ppm) and quantified with authentic standards, except for 13 unknown metabolites that were annotated based on their accurate mass ( m/z ), relative migration time (RMT), ionization mode (N or P), and most likely molecular formula. The metabolite distributions were severely asymmetric (average skewness = 40) and leptokurtic (average kurtosis = 1810). Therefore, a log 10 transformation was performed on each metabolite, which reduced average skewness to 2.4 and kurtosis to 20.8. Metabolite z-scores>5 or < –5 were considered outliers and were removed (0.12% of the data). Missing data were treated following the procedures recommended by with one modification. Instead of using the “80% rule” of excluding metabolites with <80% non-missing cases (>20% missing cases) in all dependent variable categories, a less stringent 50% rule was applied to reduce the risk of excluding relevant serum metabolites. For the sole purpose of performing the exclusions, the DQ was recoded as a categorical variable (DQ ≥1 as ‘within or above expectations’, and DQ <1 as ‘below expectations’) to avoid removing metabolites that had a missingness pattern associated with DQ. Cysteine-S-sulfate and an unknown anionic metabolite (209.030:3.04:N; C 6 H 10 O 8 ) had >50% missing cases in both DQ categories and were thus excluded. Of the remaining 72 serum metabolites that satisfied the above selection criteria, 12.5% of the data were missing due to matrix interferences, and 1.5% were missing due to non-detection (i.e. below method detection limit). Missing data due to matrix interference were imputed using the random forest (RF) method, and non-detection missing data were imputed using quantile regression imputation of left-censored data (QRILC; ). The RF method used all serum metabolome data to predict what value the missing cases would likely have taken. Statistical analysis We carried out descriptive and inferential analyses. 
The descriptive analyses were based on frequency with a 95% confidence interval (95% CI) and Student t-test or ANOVA were used to compare DQ in groups. The Pearson correlation was first used to explore the correlations between circulating metabolites (exposure) and DQ (outcome). To better assess the predictiveness of each metabolite in a single model, a partial least squares regression (PLSR) was conducted . Partial least squares (PLS)-based analyses are the most commonly used analyses when determining the predictiveness of a large number of variables as they avoid issues with collinearity, sample size, and corrections for multiple-testing . The PLSR reduces the metabolites to orthogonal components, which are maximally predictive of the outcome and generate an indicator of how much each metabolite contributes to predicting the outcome, called the variable importance projection (VIP). Because our goal was not to determine the components that are maximally predictive of DQ but to rank the metabolites on their contribution to predicting the outcome, we focused on the VIPs from this analysis. The PLSR was trained on 80% of the data, and the remaining 20% was used as test data. Training and test data were randomly allocated. The model with the optimal number of components considering predictive value and parsimony was used to generate VIP values. Serum metabolites were selected for further analyses if they had a VIP ≥1. The subsequent step was to disentangle the selected metabolites from confounding variables. A Directed Acyclic Graph (DAG; ) was used to more objectively determine the minimally sufficient adjustments for the regression models to account for potentially confounding variables while avoiding collider variables and variables in the metabolite-DQ causal pathways, which if controlled for would unnecessarily remove explained variance from the metabolites and hamper our ability to detect biomarkers. To minimize bias from subjective judgments of which variables should and should not be included as covariates, the DAG only included variables for which there was evidence from systematic reviews or meta-analysis of relationships with both the metabolome and DQ . Birth weight, breastfeeding, child’s diet quality, the child’s nutritional status, and the child’s age were the minimal adjustments suggested by the DAG. Birth weight was a variable with high missing data, and indicators of breastfeeding practice data (referring to exclusive breastfeeding until 6 months and/or complemented until 2 years) were collected only for children aged 0–23 months. Therefore, those confounders were not included as adjustments. Child’s diet quality was evaluated as MDD, the child’s nutritional status as w/h z-score, and the child’s age in months. Multiple linear regression between each metabolite and DQ were performed and adjusted for the covariates. Additional regressions were done to explore interactions between the metabolites, sex, and age. Since several circulating metabolites most associated with DQ are relevant to microbiome health, those circulating metabolites may be biomarkers of a beneficial effect of the microbiome on development. We employed mediation analyses to explore the potential role of specific serum metabolites as mediators in the relationship between certain exposure variables related to the microbiome establishment in early life, such as mode of delivery , child’s diet quality , as well as child fiber intake and DQ. 
For the mediation analyses, we adopted the approach proposed by , which provides independent estimates for the average causal mediation effect (ACME - the effect of the exposure variable on DQ that is mediated by the metabolite), the average direct effect (ADE - controlling for metabolite concentrations) of the exposure variable on DQ, and the total effect of the exposure variable on DQ (mediation plus direct effect). Bootstrap tests using 5,000 iterations evaluated whether the effects were statistically significant. Due to the exploratory nature of the mediation analysis, significance was not corrected for multiple testing. The child’s age (in months) and w/h z-score were entered as covariates. All other results were considered statistically significant at an adjusted-p ≤0.05 after the Benjamini-Hochberg correction for multiple comparisons. Statistical analyses were carried out using the R programming language (R Core Team; http://www.R-project.org ), through JupyterLab, using the following packages: ggplot2 ( http://ggplot2.org ), interactions ( https://cran.r-project.org ), dplyr ( https://cran.r-project.org ), tidyverse ( https://www.tidyverse.org/ ), pls , plsVarSel ( https://github.com/khliland/plsVarSel ), mediation . Ethical aspects The ENANI-2019 was approved by the Research Ethics Committee of the Clementino Fraga Filho University Hospital of the Federal University of Rio de Janeiro (UFRJ) under the number CAAE 89798718.7.0000.5257. Data were collected after a parent/caregiver of the child authorized participation in the study through an informed consent form and following the principles of the Declaration of Helsinki.
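To make the metabolite preprocessing described above concrete, the following is a minimal R sketch of the log transformation, outlier handling, and the 50% missingness rule. The objects `met` (a samples × metabolites matrix) and `dq` (the developmental quotient vector) are hypothetical, and this is a schematic illustration rather than the authors' original code.

```r
# Minimal preprocessing sketch (hypothetical objects `met` and `dq`).
met_log <- log10(met)                      # log10 transform to reduce skewness/kurtosis

z <- scale(met_log)                        # per-metabolite z-scores (scale() tolerates NAs)
met_log[!is.na(z) & abs(z) > 5] <- NA      # treat |z| > 5 as outliers (set to missing)

# "50% rule": exclude metabolites with >50% missing cases in BOTH DQ categories
dq_cat <- ifelse(dq >= 1, "within_or_above", "below")
keep <- apply(met_log, 2, function(x) any(tapply(is.na(x), dq_cat, mean) <= 0.5))
met_filtered <- met_log[, keep]

# Remaining gaps would then be imputed, e.g. a random forest imputation for
# matrix-interference gaps and QRILC for values below the detection limit.
```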
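The PLSR/VIP variable-selection step can be sketched in the same spirit with the pls and plsVarSel packages named above. The data frame `dat`, its outcome column `dq`, and the choice of three components are illustrative assumptions, not the exact calls used in the study.

```r
# Sketch of the PLSR / VIP >= 1 selection step (hypothetical data frame `dat`
# with log-transformed metabolite columns and the outcome column `dq`).
library(pls)        # plsr(), RMSEP()
library(plsVarSel)  # VIP()

set.seed(2019)
idx   <- sample(nrow(dat), size = round(0.8 * nrow(dat)))   # random 80/20 split
train <- dat[idx, ]
test  <- dat[-idx, ]

fit <- plsr(dq ~ ., data = train, ncomp = 10, scale = TRUE, validation = "CV")
RMSEP(fit, newdata = test)               # held-out error to weigh fit against parsimony

vip      <- VIP(fit, opt.comp = 3)       # VIP scores for a 3-component model
selected <- rownames(fit$coefficients)[vip >= 1]   # metabolites carried forward
```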
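Finally, a single mediation model of the kind described above (for example, fiber intake as the exposure, serum PAG as the mediator, and DQ as the outcome) could be specified with the mediation package as follows. All column names are hypothetical; the sketch only mirrors the reported settings (5,000 bootstrap draws, adjustment for age in months and w/h z-score).

```r
# Sketch of one mediation model (exposure: fiber intake; mediator: serum PAG;
# outcome: DQ), with hypothetical column names in the data frame `dat`.
library(mediation)  # mediate()

med_fit <- lm(pag ~ fiber + age_months + whz, data = dat)        # mediator model
out_fit <- lm(dq  ~ fiber + pag + age_months + whz, data = dat)  # outcome model

med <- mediate(med_fit, out_fit, treat = "fiber", mediator = "pag",
               boot = TRUE, sims = 5000)  # ACME, ADE and total effect via bootstrap
summary(med)
```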
In total, 14,558 children under five years were evaluated, and 12,598 children aged 6–59 months were eligible for blood collection, of whom 8829 (70%) had the biological material collected. Due to the costs involved in the metabolome analysis, the sample had to be reduced to 57% of the ENANI-2019 participants with stored blood specimens. Therefore, the infants were stratified by age groups (6–11, 12–23, and 24–59 months) and health conditions such as anemia and micronutrient deficiencies. The selection process aimed to preserve the diversity of health statuses present in the original sample. Ultimately, 5004 children were selected for the final sample through a random sampling process that ensured a balanced representation across these groups . The mean infant age was 34 months, and 48.9% of the participants were between 36 and 59 months. Almost half of the children evaluated lived in the North (24.8%) and Northeast (22.6%) regions. Most children were of normal weight, and 25% were at risk or had excessive weight. The prevalence of MDD was 59.3%. Most children (72.4%) lived in households with a monthly income greater than USD 248.7, the Brazilian minimum wage, and 51% had a mother/caregiver with at least 11 years of education . The DQ mean (95% CI) was 0.98 (0.97; 0.99). Overall, children had lower DQs if they were male (p=2 x 10 –14 ), older (p<2 x 10 –16 ), had lower weight for height (p=4 x 10 –5 ), consumed fewer than 5 food groups of the MDD (p=3 x 10 –4 ), were from the northern region (p=2 x 10 –14 ), had a lower monthly family income ( r =0.05, p=4 x 10 –4 ), and a mother/caregiver with fewer years of education (p=10 –15 ). Mode of delivery was not significantly associated with DQ (p=0.724; ). Given the size of our sample, statistical power is not an issue in our analyses. Considering an alpha of 0.05 for a two-sided test, a sample size of 5000 has 95% power to detect a correlation of r =0.05 and an effect of f2=0.003 in a multiple regression model with 4 predictors. As an initial assessment of the zero-order associations between DQ and serum metabolites, Pearson’s correlations were performed. This revealed 26 negative and 2 positive statistically significant associations . Two unknown anions, annotated by their accurate mass, relative migration time, and ionization mode as 117.0552:1.67:N and 135.0293:1.71:N, presented positive correlations. These two anions were tentatively identified as 3-hydroxyvaleric acid ( r =0.05, 95% CI [0.02; 0.08], adjusted-p=0.001) and erythronic acid ( r =0.04, 95% CI [0.01; 0.07], adjusted-p=0.011; ).
The highest correlations were all negative and were for serum phenylacetylglutamine (PAG, r =–0.16, 95% CI [–0.18; –0.13], adjusted-p=10 –26 ), cresol sulfate (CS, r =–0.15, 95% CI [–0.18; –0.12], adjusted-p=10 –24 ), hippuric acid (HA, r =–0.14, 95% CI [–0.17; –0.11], adjusted-p=2 x 10 –22 ), creatinine (Crtn, r =–0.13, 95% CI [–0.16; –0.1], adjusted-p=5 x 10 –19 ), trimethylamine- N -oxide (TMAO, r =–0.1, 95% CI [–0.13; –0.07], adjusted-p=2 x 10 –11 ), citrulline (Cit, r =–0.09, 95% CI [–0.12; –0.07], adjusted-p=3 x 10 –10 ), deoxycarnitine or γ-butyrobetaine (dC0, r =–0.09, 95% CI [–0.12; –0.06], adjusted-p=3 x 10 –9 ), and 3-methylhistidine (MeHis, r =–0.07, 95% CI [–0.1; –0.05], adjusted-p=10 –6 ). The model with three components was used for parsimony and to avoid overfitting. The serum metabolites that had the highest loads on the components were the branched-chain amino acids, including leucine (Leu), isoleucine (Ile), and valine (Val) on component 1, the uremic toxins, CS and PAG on component 2 and betaine and amino acids, mainly glutamine (Gln) and asparagine (Asn) on component 3 . The three components accounted for 39.8% of the total metabolite variance and 4.3% of the DQ variance . Twenty-eight serum metabolites had a VIP ≥1 . These metabolites were then entered into the multiple linear regressions adjusted for the child’s diet quality (MDD), nutritional status (weight for length/height z-scores - w/h z-score), and age (months), which were the minimum adjustments indicated by the DAG as described in the statistical analysis section. We found inverse associations of serum concentrations of CS ( β =–0.07; adjusted-p <0.001), HA ( β =–0.06; adjusted-p <0.001), PAG ( β =–0.06; adjusted-p <0.001), and TMAO ( β =–0.05; adjusted-p=0.002) with the DQ of children, which were also significant in the models described below . Since the child’s diet and metabolism may change as the child ages and as neurodevelopmental disorders occur more frequently in boys than in girls, interactions between the metabolites and child age (in months) and between metabolites and child sex were also tested to evaluate a possible modification of the effects by these variables in the models. Considering the interactions between serum metabolites and child age, we observed associations for Crtn (β-interaction=0.05; adjusted-p=0.003), HA (β-interaction=0.04; adjusted-p=0.041), MeHis (β-interaction=0.04; adjusted-p=0.018), PAG (β-interaction=0.04; adjusted-p=0.018), TMAO (β-interaction=0.05; adjusted-p=0.003), and Val (β-interaction=0.04; adjusted-p=0.039; ). Comparing children one standard deviation (SD) above the mean child age with those one standard deviation below (49 months vs. 19 months), we observed opposite directions for the association with DQ for serum Crtn (for children aged - 1 SD: β = - 0.05; p=0.01;+1 SD: β =0.05; p=0.02) and for MeHis (- 1 SD: β = - 0.04; p=0.04;+1 SD: β =0.04; p=0.03; ). For serum TMAO, PAG, Val, and HA, the effect size went from a negative value for younger children to a non-significant value for older children . No associations were found for interactions between child sex and each metabolite on DQ (data not shown). Mediation analyses identified that serum PAG was a mediator for the relationship between mode of delivery (ACME = 0.003, p=2 x 10 –16 ), child’s diet quality (ACME = 0.002, p=0.019), and child fiber intake (ACME = - 0.002; p=0.034) and DQ . 
Serum HA (ACME = - 0.004, p<0.001) and TMAO (ACME = - 0.002, p=0.022) were also mediators for the relationship between child fiber intake and DQ . According to the mediation analysis, having a vaginal delivery ( β =–0.05; p<0.001), not achieving MDD ( β =–0.03; p=0.019), and greater total fiber intake ( β =0.03; p=0.031) increased the serum PAG concentration, that in turn was inversely associated with DQ. Moreover, a higher dietary fiber intake was directly associated with HA ( β =0.06; p<0.001) and TMAO ( β =0.03; p=0.018), which also was inversely associated with DQ. A limited number of investigations have examined the link between blood, urine, or stool metabolites and early stages of child development, with most studies focusing on comparing the metabolic profile of patients with developmental disorders against healthy controls . This is the first study to explore the association between child serum metabolome and ECD on a population-based level. According to our results, serum concentrations of PAG, CS, HA and TMAO were inversely associated with child DQ, a validated measure to express ECD. The associations of PAG, HA, TMAO, and Val on DQ were also age-dependent and showed stronger associations for children <34 months. In addition, inverse associations were found for serum levels of MeHis and Crtn with DQ for children <34 months, whereas direct associations were found for children >34 months. PAG is the glutamine conjugate of phenylacetic acid generated from the gut microbial-dependent metabolism of phenylalanine . As circulating concentrations of the essential amino acid, phenylalanine, were not directly associated with DQ in our study, this suggests that differences in gut microbiome composition impacting PAG formation among children are likely a major determinant of DQ rather than dietary protein intake. Similarly, to our findings, a previous study involving 76 patients with Attention-Deficit/Hyperactivity Disorder (ADHD) and 363 healthy children aged 1–18 years identified an inverse relationship between urinary PAG and ADHD . While the specific pathways contributing to such disorders remain to be fully elucidated, it is known that PAG is structurally similar to catecholamines and can activate adrenergic receptors . The stimuli of adrenergic receptors may have broader implications on behavioral responses, potentially influencing neurological activities . Likewise, in our study, circulating TMAO levels were inversely associated with child DQ. Elevated concentrations of TMAO in plasma and cerebrospinal fluid are also implicated in age-related cognitive dysfunction, neuronal senescence, and synaptic damage in the brain . In addition, its increased levels have been associated with activation of inflammatory pathways and neurodegenerative diseases . Previous studies reported that TMAO can activate astrocytes and microglia and trigger a cascade of inflammatory responses in the brain, induce oxidative stress, superoxide production, and mitochondrial impairment, and cause inhibition of mTOR signaling in the brain . In this context, dysregulation of the mTOR signaling pathway may lead to substantial abnormalities in brain development, contributing to a wide array of neurological disorders, including ASD, seizure, learning impairments, and intellectual disabilities . HA is a glycine conjugate derived from exposure to benzoic acid (i.e. preservative in processed foods), or generated via intestinal microbial fermentation of dietary polyphenols and phenylalanine . 
Like other co-metabolized species, circulating HA concentrations depend on dietary exposures and host metabolism . A case-control study involving 65 children with ASD and 20 children with typical development reported that urinary HA was significantly higher in the ASD group, corroborating the inverse association found with DQ in our study. However, the effect of HA on metabolic health is still controversial as it has been proposed as a potential dietary biomarker for fruit and vegetable consumption in healthy children and adolescents . HA also inhibits the Organic Anion Transporter (OAT) 3 function and contributes to the toxic action of other compounds, including indoxyl sulfate , which may affect cognitive function by disrupting the brain barrier . CS is a product of tyrosine fermentation in the gut involving more than 55 p-cresol producing bacteria prior to hepatic sulfate conjugation . CS was inversely associated with DQ in our study, and it has been studied in the early stages of life, particularly concerning conditions such as ASD . Urinary p-cresol and CS are elevated in ASD-diagnosed children <8 years . Animal models have shown that CS is a gut-derived neurotoxin that can impact neuronal cell structural remodeling even at low doses via oxidative stress and secretion of brain-derived neurotrophic factor . Indeed, p-cresol might impact developmental processes since it is related to impaired dendritic development, synaptogenesis, and synapse function in hippocampal neurons, which are crucial for cognitive and neural development in children . Prior investigations have identified PAG, CS, HA, and TMAO as products of gut microbiota metabolism . Specifically, dietary aromatic amino acids are metabolized by gut microbiota in the large intestine, converting phenylalanine into PAG and HA and tyrosine into CS . In contrast, TMAO originates from trimethylamine (TMA), which is produced from betaine compounds, including γ-butyrobetaine (dC0), choline, and carnitine, via gut microbiota co-metabolism and is subsequently oxidized to TMAO in the liver . These metabolic compounds in the bloodstream may elicit physiological responses influencing the central nervous system through direct passage across the blood-brain barrier or indirectly through vagus nerve stimulation . Such dynamics underscore the complex interactions between environmental exposures early in life, such as mode of birth and the child’s diet, and brain development in early childhood . Overall, serum PAG, HA, and TMAO showed a significant average causal mediation effect with dietary fiber intake that was inversely associated with DQ. However, the interpretation of mediation effects is limited by the observational nature of the data, and third variables may explain unexpected relationships between the variables in the analysis. Nevertheless, the results of our mediation analysis are an important step that future studies can build on to further investigate the causal pathways leading to optimal DQ levels. The age-dependent associations observed in our study are consistent with the age-related changes in metabolic profile reported in previous studies . Increased urinary TMAO and betaine levels were found in children aged six months, whereas creatine and Crtn levels increased significantly after six months . Similar findings were reported by Gu et al. in a study that included children from newborn to 12 years of age. The urinary Crtn increased with age, whereas glycine, betaine/TMAO, citrate, succinate, and acetone decreased .
These changes may reflect a physiological age-dependent process related to the rapid growth occurring in early life . Interestingly, we observed that the child’s age changed the direction of the associations of Crtn and MeHis with DQ. Crtn is generated non-enzymatically from creatine and is related to energy production within skeletal muscle tissue, whereas MeHis is related to protein turnover and has been evaluated as a biomarker for the rate of skeletal muscle breakdown . For example, plasma and urinary MeHis respond over time to changes toward a health-promoting Prudent diet in contrast to a Western diet, and their concentrations are positively correlated with greater self-reported daily protein intake . We hypothesize that higher serum concentrations of Crtn and MeHis in older children (>49 months) may reflect the greater physical activity/mobility acquired through the first years . Our study provided valuable insights into the potential role of the serum metabolome in ECD for children aged 6–59 months. One of the strengths of this study is the large sample size, which allows for a more comprehensive representation of the population on a national level. Using a subset of 5004 children due to cost restrictions did not compromise the representativeness. The final sample was randomly selected to represent the total number of children with blood drawn (8829 children); hence, our efforts were to preserve the original characteristics and representativeness of that sample. Furthermore, our study employed a quantitative targeted and exploratory untargeted metabolomics method. This high-throughput metabolomics platform is strengthened by implementing rigorous quality control measures and batch-correction algorithms, ensuring the high accuracy and reproducibility needed for large-scale epidemiological studies. We used the DQ as a variable for evaluating ECD; it is a continuous parameter that integrates the developmental milestones attained with the child’s chronological age at their achievement. DQ has been used previously ( and ) and has advantages such as enabling the assessment of each item rather than just the final score, as the item set might be biased, meaning there could be an imbalance in the number of activities more commonly achieved among the specified items. Consequently, reaching the maximum score on the scale may be easier for certain age groups. Some limitations are worth mentioning. First, this study did not include the analysis of hydrophobic/water-insoluble lipids, limiting overall metabolome coverage. Also, the inherent limitations of a cross-sectional study prevent us from making causal inferences concerning the temporal relationship between serum metabolic phenotypes and ECD trajectories. Moreover, birth weight and breastfeeding practices were available only for a limited number of participants and were not included in the regression adjustments. Concerning the child’s diet assessment, we estimated dietary diversity and fiber intake based on a single one-day food intake report, with the MDD specifically measuring dietary diversity within diet quality. Lastly, stool microbiome data were not collected from children in ENANI-2019 as this was not a study objective in this large population-based nutritional survey. 
However, the lack of microbiome data does not reduce the importance or relevance of these findings, since there is no evidence that the microbiome and factors affecting microbiome composition are confounders in the association between the serum metabolome and child development. In conclusion, this study represents a pioneering effort in Brazil, a population-based survey targeting children from 6 to 59 months of age that incorporated serum metabolome and ECD analysis. We found that serum PAG, HA, CS, and TMAO were inversely associated with ECD and that age can modify the effect of PAG, HA, TMAO, Crtn, and MeHis on development. Because some circulating metabolites associated with DQ are relevant to microbiome health, it is possible that those circulating metabolites may be biomarkers of a beneficial effect of the microbiome on development. These results suggest that a panel of circulating metabolites might offer a preliminary warning of developmental risk and potentially be used as a screening tool to help identify children at risk for developmental delays at early stages of life. We believe these results contribute to a body of evidence that can help overcome knowledge gaps and support the formulation and redirection of public policies aimed at full child growth and development; the promotion of adequate and healthy nutrition and food security; the encouragement, support, and protection of breastfeeding; and the prevention and control of micronutrient deficiencies. Further prospective longitudinal studies, including stool-based microbiome analysis, are warranted to validate our findings and establish targeted intervention biomarkers, as well as to provide further insights into possible mechanistic pathways.
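To make the average causal mediation effect (ACME) estimates discussed above more concrete, the sketch below shows how this type of mediation analysis is commonly fitted in R with the mediation package. The package choice, the data frame `d`, the variable names (fiber, pag, dq) and the covariates are illustrative assumptions only; the software actually used for the mediation analysis is not specified in this excerpt.

```r
# Minimal sketch of a causal mediation analysis: fiber intake -> serum PAG -> DQ.
# Assumed data frame `d` with columns: dq, pag, fiber, age_months, sex.
library(mediation)

# Mediator model: does the exposure (fiber intake) predict the serum metabolite?
med_fit <- lm(pag ~ fiber + age_months + sex, data = d)

# Outcome model: metabolite and exposure jointly predicting the development quotient.
out_fit <- lm(dq ~ pag + fiber + age_months + sex, data = d)

# Bootstrap estimate of the average causal mediation effect (ACME) and direct effect.
med_out <- mediate(med_fit, out_fit,
                   treat = "fiber", mediator = "pag",
                   boot = TRUE, sims = 1000)
summary(med_out)  # reports ACME, ADE, total effect and proportion mediated
```

The same pattern would be repeated for each candidate mediator (e.g., HA or TMAO) to obtain metabolite-specific ACME values.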
Plasma lipidomic analysis reveals disruption of ether phosphatidylcholine biosynthesis and facilitates early detection of hepatitis B-related hepatocellular carcinoma
10152e61-9e2f-437b-adc3-16830090b92a
11849150
Biochemistry[mh]
Hepatocellular carcinoma (HCC) is one of the most prevalent cancers worldwide, ranking as the third leading cause of cancer-related deaths globally in 2022 . In China, HCC accounted for over 310,000 deaths in 2022, making it the second deadliest cancer type . Major risk factors for HCC include alcohol consumption, diabetes, nonalcoholic steatohepatitis (NASH), and infection by hepatitis viruses, with hepatitis B virus (HBV) being the leading cause, responsible for over half of global HCC cases . Particularly, the HBV infection rate among HCC patients in China reached 92.05% . Despite significant advancements in HCC treatment, particularly with radical hepatectomy, over 70% of HCC patients are diagnosed at advanced stages, when curative treatment options are no longer feasible . Current clinical screening methods for early HCC detection, including liver ultrasound imaging and serum alpha-fetoprotein (AFP) testing, are inadequate. Ultrasound has limited sensitivity for detecting small or early-stage HCC lesions, while AFP testing often fails to distinguish early HCC from other liver diseases such as cirrhosis or hepatitis . For instance, studies have shown that up to 40% of HCC patients have normal AFP levels, and elevated AFP can also occur in non-malignant liver diseases . Advanced imaging modalities like CT and MRI, although more sensitive, are costly and impractical for routine screening in high-risk populations . To overcome these challenges, several prediction scoring systems have been developed to assess HCC risk in HBV-infected populations by incorporating clinical and laboratory parameters, such as age, albumin, and AFP . However, these scoring systems lack the sensitivity and specificity required for early detection. Similarly, multi-omics approaches, such as proteomics and metabolomics, have emerged as promising tools for HCC biomarker discovery. Proteomics studies have provided valuable insights into the molecular mechanisms of HCC, while metabolomics studies have demonstrated significant alterations in metabolic profiles, particularly in blood and tissue samples. However, many of these studies focus on general HCC populations and provide limited insights into the specific metabolic changes underlying HBV-related HCC. Metabolic perturbation is a key feature of cancer. The liver, as the largest metabolic organ, plays a central role in nutrient metabolism, bile acid metabolism, and toxin clearance, making hepatic malignancies highly likely to affect systemic metabolic states . Lipidomics, a rapidly advancing branch of metabolomics, offers a comprehensive view of lipid metabolism and its alterations in disease progression. Unlike previous studies that primarily focus on proteomics or general metabolomics, here we employed lipidomics to analyze hydrophobic metabolites in HBV-infected patients, including those with chronic hepatitis B (CHB), HBV-related liver cirrhosis (LC), and HBV-related HCC. Through multi-omics integration, we identified significant elevations in ether phosphatidylcholines (ether PCs) and confirmed the dysregulation of ether PC biosynthesis pathways in HBV-HCC development. Finally, we developed a diagnostic model based on a panel of ether PCs selected by machine learning, which demonstrated high diagnostic accuracy for detecting early-stage HCC. This study provides a novel, non-invasive, and cost-effective strategy to facilitate early detection and intervention in HBV-HCC patients. 
Study subjects and plasma collection Subjects diagnosed with CHB, HBV-related LC and HBV-related HCC at Beijing Youan Hospital were enrolled from February, 2024 to April, 2024. Criteria for inclusion are listed as follows: i) Patients aged between 18 and 80 years. ii) Patients diagnosed with chronic hepatitis B (CHB), HBV-related cirrhosis (LC), or HBV-related hepatocellular carcinoma (HCC) based on the guidelines for the management of HBV-related diseases. iii) Patients with a history of chronic hepatitis B and meeting the diagnostic criteria for HBV-related cirrhosis or hepatocellular carcinoma. iv) Willingness to provide informed consent for participation in the study. Criteria for exclusion are listed as follows: i) Patients with severe systemic diseases involving the respiratory, cardiac, or central nervous systems, as well as those with autoimmune hepatitis, congenital, or hereditary liver diseases. ii) Patients with psychiatric or psychological disorders, including anxiety or depression. All HCC subjects were staged according to China liver cancer staging (CNLC) and over two thirds of the HCC subjects were at CNLC stage I. Blood samples of enrolled subjects were collected by EDTA tubes and were kept at 4℃ for less than 6 h before centrifugation to collect plasma. The plasma samples were stored at -80 ℃ until sample preparation for LC–MS analysis. Sample collection and metabolite extraction Quality control (QC) samples were obtained by forming a mixed pool of different samples. For untargeted lipidomic analysis, 100μL of the liquid–liquid extraction solution (chloroform–methanol 2:1, v/v) is added to 25μL of each serum sample including QC sample. Samples are vortexed for 30 s, vibrated at 1200 rpm for 8 min, and centrifuged at 12,000 rpm for 10 min. Lower organic phase containing hydrophobic metabolites are collected into new tubes and evaporated at room temperature under vacuum. The residue is dissolved in 25μL dissolving solution (chloroform–methanol 1:1, v/v), vortexed for 30 s and diluted by adding 75μL diluting solution (isopropanol-acetonitrile-H 2 O 2:1:1, v/v/v). The mixture is then vortexed for 30 s, centrifuged at 12,000 rpm for 15 min, and supernatant transferred into vials for LC–MS analysis. Untargeted lipidomic analysis Untargeted lipidomic analysis was performed using liquid chromatography-mass spectrometry (LC–MS). The Ultimate 3000 UHPLC system (Thermo) and Acquity CSH C18 column (100 × 2.1 mm i.d., 2.5 μm, Waters) were used for reversed phase liquid chromatographic separation. Column temperature was set to 50 ℃. Acetonitrile (LC–MS grade, Fisher Scientific, USA)-water (60/40, v/v) with 10 mM ammonium acetate (Sigma-Aldrich, St. Louis, MO, USA) and 0.1% formic acid (Sigma-Aldrich, St. Louis, MO, USA) was used as mobile phase A, and isopropanol (LC–MS grade, Fisher Scientific, USA)–acetonitrile (90/10, v/v) with 10 mM ammonium acetate and 0.1% formic acid was used as mobile phase B. The flow rate was set to 0.3 mL/min. The gradient of liquid phase was set as follows: 0 min—40% B; 2 min—43% B; 2.1 min—50% B; 10 min—60% B; 10.1 min—75% B; 16 min – 99% B; 17 min—99% B; 18 min—40% B; and 19 min—40% B. Q-Exactive (hybrid quadrupole-Orbitrap mass spectrometer) coupled with heated electrospray ionization (HESI) source (Thermo Fisher Scientific) was used for mass analysis. Data dependent acquisition (DDA) mode was used. 
Each acquisition cycle consists of one survey scan (MS1 scan) at 35,000 resolution from 190 to 1200 m/z, followed by ten MS/MS scans in HCD mode at 17,500 resolution. MS/MS parameters were set as follows: Automatic gain control target (AGC), 5e6 (maximum injection time 80 ms) for the MS1 scan and 1e5 (maximum injection time 70 ms) for MS/MS scans; Fixed first mass, 50 m/z; Dynamic exclusion, 8 s; Stepped normalized collision energy (step-NCE), 15, 30, and 45. HESI ion source parameters were set as follows: spray voltage, 3.3 kV for positive ion mode and 3.0 kV for negative ion mode; ion source sheath gas, 40; aux gas, 10; capillary temperature, 320 ℃; probe heater temperature, 300 ℃; S-lens RF level, 55.5. QC samples were analyzed repeatedly in the batch of sample acquisition to evaluate the stability of the LC–MS instrument. All samples were acquired in the positive–negative switching ion mode. Peak extraction, alignment, identification and quantification from raw data files were performed using the MS-DIAL software (version 4.70). Specifically, the internal MS/MS lipid spectral libraries in the MS-DIAL software were used for lipid identification. Characteristic fragments of selected lipid features were manually checked in MS/MS spectra from raw data files. SVM-based feature selection An SVM model (liblinear 2.20) was built to classify categories of enrolled subjects and select important features as previously reported . SVM was employed to build classification models in 1000 experiments of fourfold cross-validation and to generate the weight representing the importance in classification of each feature: $$\min_{\mathbf{w},b} \frac{\|\mathbf{w}\|^{2}}{2}, \quad \text{s.t. } (\mathbf{w}^{T}x_{i}+b)y_{i} \geq 1$$ As shown in the above equation, the inferred weight vector $\mathbf{w}$ could be regarded as the importance weight for each feature. A validation operation of feature selection was conducted to select the top-ranking important features with the highest classification accuracy. The top 50 important features were analyzed to generate predictive models for feature selection, which was performed by adding top-ranking features one by one, e.g., selecting the Top 1 feature as the first model, the Top 2 features as the second model, and so on up to the Top-N features as the N-th model. For performance evaluation, mean accuracies for each model (N = 50) in feature selection were calculated after 100 iterations of fourfold cross-validation. Measurement of AFP Serum AFP levels were measured using an automated chemiluminescence immunoassay (CLIA) system according to the manufacturer's protocol (Abbott ARCHITECT i2000SR, Abbott Laboratories, Chicago, IL, USA). The detection range of the assay was 0.5–350 ng/mL. All samples were processed in duplicate, and strict quality control was maintained throughout the testing process. Multi-omic analysis Datasets of HCC from The Cancer Genome Atlas (TCGA) and Clinical Proteomic Tumor Analysis Consortium (CPTAC) databases were used for RNA and protein analysis, respectively, via the integrated online analysis platform at https://ualcan.path.uab.edu/index.html . The RNA levels and protein levels of HCC patients in primary tumor tissue compared to normal adjacent tissue were analyzed. Basic patient characteristics of the original datasets in TCGA (TCGA-LIHC) and CPTAC (PDC000198) that we used were exported from the official websites and summarized in Table S7 of Additional file 2. 
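To make the SVM weight-based feature selection strategy described above more concrete, the sketch below reproduces its core idea in R: fit a linear SVM repeatedly under fourfold cross-validation, accumulate the absolute weights as importance scores, and then score models built from the top-N ranked features. The original analysis used liblinear in MATLAB; this R version with the e1071 package, the object names and the number of repeats are illustrative assumptions, not the authors' code.

```r
# Sketch: linear-SVM weight-based feature ranking with repeated 4-fold cross-validation.
# Assumed: `X` is a samples x features matrix of lipid abundances,
#          `y` is a two-level factor (e.g., "HCC" vs "CHB").
library(e1071)

set.seed(1)
n_rep <- 25                              # illustrative; the paper ran 1000 CV experiments
w_sum <- rep(0, ncol(X))

for (r in seq_len(n_rep)) {
  folds <- sample(rep(1:4, length.out = nrow(X)))      # random fourfold split
  for (k in 1:4) {
    fit <- svm(X[folds != k, ], y[folds != k], kernel = "linear", scale = TRUE)
    w   <- t(fit$coefs) %*% fit$SV                      # recover the primal weight vector
    w_sum <- w_sum + abs(as.numeric(w))
  }
}
ranking <- order(w_sum, decreasing = TRUE)              # features ranked by mean |weight|

# Mean cross-validated accuracy of models built from the top-N ranked features.
top_n_accuracy <- sapply(1:50, function(n) {
  sel   <- ranking[1:n]
  folds <- sample(rep(1:4, length.out = nrow(X)))
  mean(sapply(1:4, function(k) {
    m <- svm(X[folds != k, sel, drop = FALSE], y[folds != k], kernel = "linear")
    mean(predict(m, X[folds == k, sel, drop = FALSE]) == y[folds == k])
  }))
})
which.max(top_n_accuracy)   # N giving the highest mean accuracy
```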
Statistical analysis Metaboanalyst ( https://www.metaboanalyst.ca/ ) was used for preliminary identification of differential lipid features by ANOVA, PCA, PLSDA and hierarchical cluster analysis. Features with FDR < 0.05 (Benjamini–Hochberg method) in the ANOVA analysis were subjected to further analysis. The corresponding R2Y and Q2Y values of the PLSDA analysis are listed in Table S6 of Additional file 2. The MATLAB software (R2022b) was used to perform SVM-based feature selection, SVM modeling and receiver operating characteristic (ROC) analysis. GraphPad Prism 9.0.0 and R software were used for data analysis and visualization, and the R packages “Mfuzz”, “psych” and “Rtsne” were used for mfuzz clustering, correlation analysis and t-SNE visualization, respectively. Modeling by multiple ML algorithms was performed using the following functions in the tidymodels framework of R, including XGBoost (boost_tree), Decision Tree (decision_tree), Logistic Regression (logistic_reg), KNN (nearest_neighbor), Random Forest (rand_forest) and SVM (svm_linear). 
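As a hedged illustration of the multi-algorithm comparison named above, the following R sketch shows how a few of the listed tidymodels specifications can be evaluated on the same cross-validation folds and compared by ROC AUC. The data frame `df`, its two-level `group` outcome, the chosen engines and the fold count are assumptions for illustration; note that with far more lipid features than samples, regularized or tree-based models are usually preferred over a plain logistic regression.

```r
# Sketch: compare several classifiers on identical 4-fold resamples by ROC AUC.
# Assumed: data frame `df` with a two-level factor `group` plus lipid feature columns.
library(tidymodels)

set.seed(1)
folds <- vfold_cv(df, v = 4, strata = group)

specs <- list(
  logistic = logistic_reg() %>% set_engine("glm"),
  rf       = rand_forest() %>% set_mode("classification") %>% set_engine("ranger"),
  xgb      = boost_tree()  %>% set_mode("classification") %>% set_engine("xgboost"),
  svm      = svm_linear()  %>% set_mode("classification") %>% set_engine("kernlab")
)

auc_by_model <- lapply(specs, function(spec) {
  workflow() %>%
    add_formula(group ~ .) %>%
    add_model(spec) %>%
    fit_resamples(resamples = folds, metrics = metric_set(roc_auc)) %>%
    collect_metrics()
})
auc_by_model   # mean cross-validated ROC AUC per algorithm
```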
Characteristics of study subjects To uncover the circulatory metabolic changes that occur during cancer development in the high-risk population with HBV infection, subjects diagnosed with CHB, HBV-related LC and HBV-related HCC were enrolled in this study. Summarized and detailed characteristics of the enrolled subjects are listed in Table and Table , respectively. Basic characteristics, including age, sex and HBV infection, did not differ significantly among the three groups. To obtain a full picture of the metabolic disturbance in the process of tumorigenesis in the HBV-infected population, we sought to characterize the global plasma lipid metabolites in CHB, LC and HCC participants by lipidomic analysis. We identified 1728 features in the positive electrospray ionization mode (ESI+) and 939 features in the negative electrospray ionization mode (ESI-). Plasma lipidomic profiling unveils differential lipids from multiple lipid classes For a global view of the obtained lipidomic data, we performed PLSDA analysis and t-SNE analysis (Fig. ) to visualize all lipids identified. As shown by our data, the CHB group showed clear separation from the other two groups, while the LC and HCC groups exhibited less difference. 
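The PLSDA and t-SNE overview described here can be reproduced, in spirit, with the Rtsne package named in the statistical analysis section; the sketch below is illustrative only, and the object names and the perplexity value are assumptions.

```r
# Sketch: 2-D t-SNE overview of the lipid feature matrix, colored by group.
# Assumed: `X` is a samples x features matrix; `group` is a factor (CHB / LC / HCC).
library(Rtsne)

set.seed(1)
tsne <- Rtsne(as.matrix(X), dims = 2, perplexity = 30, check_duplicates = FALSE)

grp <- factor(group)
plot(tsne$Y, col = as.integer(grp), pch = 19,
     xlab = "t-SNE 1", ylab = "t-SNE 2")
legend("topright", legend = levels(grp), col = seq_along(levels(grp)), pch = 19)
```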
Next, ANOVA analysis was performed to identify the differential lipid features (FDR < 0.05) among the three groups, and 508 features in ESI + mode and 359 features in ESI- mode were subjected to further analysis. Lipid classes of all differential features were then observed. The number of differential lipid features in each lipid class detected in ESI + or ESI- mode is listed in Table S3, and lipid classes with the top 5 most features in each ion mode are demonstrated in Fig. . Plasma levels of ether PCs closely correlate with hepatic carcinogenesis To identify the crucial metabolic changes during hepatic carcinogenesis in subjects with HBV infection, we thereby performed mfuzz clustering using the differential lipids to observe the variation patterns of lipids in different clusters. Five clusters were obtained in ESI + or ESI- mode (Fig. A-B). In clusters illustrating consecutive decline along the trajectory of CHB-LC-HCC as disease development, including cluster 2 in ESI + mode and cluster 1 in ESI- mode, phosphatidylcholine (PC) was identified as the lipid class with the most features (Fig. C). While in clusters illustrating consecutive elevation, including cluster 3 in ESI + mode and cluster 5 in ESI- mode, ether PC was identified as the lipid class with the most features (Fig. D). Abundances of PCs and ether PCs in ESI- cluster 1 and ESI + cluster 3, respectively, are presented in Fig. E. Collectively, these data indicate that increased ether PC levels and decreased PC levels in plasma possibly account for the malignant conversion in hepatitis B-related liver diseases. Integrated analysis by multi-omic data illustrates dysregulation of the ether PC biosynthetic pathway To further validate the role of ether PC biosynthesis in hepatocarcinogenesis, we then investigated the expression of key enzymes modulating biosynthesis of ether PC. GNPAT and AGPS, catalyzing key steps of ether bond formation in liver peroxisomes , illustrated obvious upregulation in HCC tissue samples compared to normal controls in both RNA levels (Fig. A) and protein levels (Fig. B). PEMT, accounting for conversion from ether PE to ether PC, markedly declined in HCC samples, while CHPT1, accounting for conversion from ether DG to ether PC, significantly increased in HCC samples (Fig. A-B). FAR1 and FAR2, which have been previously known to show negative responses to levels of ether PC, showed decreased RNA levels in HCC while no obvious change in protein levels (Fig. A-B). The alterations of these key enzymes in ether PC biosynthesis are summarized and illustrated in Fig. C. Together, these data suggest that biosynthesis of ether PC is indeed enhanced in the process of hepatocarcinogenesis. Ether PCs show superior classification performances in machine-learning models compared to alpha fetoprotein Given that our data demonstrated a critical role of ether PC in hepatocarcinogenesis, we developed SVM-based machine-learning models using abundances of all identified ether PCs to detect HCC subjects from CHB or LC subjects. Firstly, a feature selection strategy was performed to determine the importance of individual ether PC lipid feature and the optimal number of lipid feature combination in the models. The average accuracies of fourfold cross validation models in 100 tests using 1–50 top-weighted features are illustrated in Fig. A-B. For discriminating HCC from CHB, using 11 top-weighted features, the model reached the highest accuracy of 81.83% (Fig. A). 
For discriminating HCC from LC, using 31 top-weighted features, the model reached the highest accuracy of 77.86% (Fig. B). The selected features and their corresponding weights in the SVM models are listed in Table S4. The ROC curve for classification of HCC from CHB using the 11 selected features for SVM modeling showed an area under the ROC curve (AUC) of 0.849 (Fig. C), and the ROC curve for classification of HCC from LC using the 31 selected features for SVM modeling showed an AUC of 0.829 (Fig. D). To further determine the best ML algorithm for modeling, we built machine learning models based on several ML algorithms and depicted ROC curves for comparison. The performances of the models built with the selected features were compared by AUC (Fig. of Additional file 1). Compared with the other algorithms, the SVM model showed the second highest AUC for discriminating CHB vs. HCC and the highest AUC for discriminating LC vs. HCC. The diagnostic performance of alpha fetoprotein (AFP) was also evaluated for CHB vs. HCC (Fig. E) and LC vs. HCC (Fig. F) by ROC curves, demonstrating the limited value of AFP for detecting HCC in our study cohorts. Together, these results indicate that SVM modeling using the selected ether PCs performs better than the conventional marker AFP in detecting HBV-related HCC. Levels of plasma ether PCs are significantly correlated with indicators of liver function To further investigate the biological significance of ether PCs, the correlations between paired plasma ether PC levels and clinical examination or laboratory test results were determined by Spearman correlation. The ether PCs in ESI+ cluster 3 mentioned earlier, which showed an increasing trend during hepatocarcinogenesis, and clinical indicators of liver function were included in the analysis (Fig. and Table S5). The ether PCs most significantly associated with indicators of liver function included PC O-33:1, PC O-32:2, PC O-36:4, PC O-36:2, PC O-32:0, PC O-34:1, PC O-32:1 and PC O-38:6. Our data revealed extensive correlations between ether PCs and indicators of liver function reflecting liver cell injury, liver excretion function, liver reserve function and liver interstitial changes. These findings highlight the value of plasma ether PCs in evaluating liver function, which may ultimately facilitate early detection of hepatocarcinogenesis in the HBV-infected population. 
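A hedged sketch of the Spearman correlation screen described above, using the psych package named in the statistical analysis section, is shown below; the data frame names and the FDR adjustment are illustrative assumptions.

```r
# Sketch: Spearman correlations between ether PC abundances and liver function tests.
# Assumed: `ether_pc` and `liver_tests` are data frames with one row per subject
# (ether PC abundances and clinical indicators such as ALT, AST, bilirubin, albumin).
library(psych)

ct <- corr.test(ether_pc, liver_tests, method = "spearman", adjust = "fdr")

round(ct$r, 2)   # Spearman correlation coefficients (ether PCs x clinical indicators)
round(ct$p, 3)   # p-values, multiplicity-adjusted according to `adjust` where applicable
```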
In this study, we employed a lipidomic analytic approach to delineate the global circulatory lipid metabolites associated with hepatocarcinogenesis in a cohort with HBV infection. Through mfuzz clustering, we identified marked variations in plasma lipids, particularly in the class of ether phosphatidylcholines (ether PCs), and further verified enhanced biosynthesis of ether PCs by integrating multi-omic datasets, including transcriptomic and proteomic data from HCC tissue samples. Ultimately, we developed a diagnostic model utilizing ether PCs selected via a machine-learning-based feature selection strategy, which demonstrated high efficiency in differentiating HCC patients from CHB and LC subjects. In the field of HCC diagnosis, various methods have been explored. Traditional methods based on serum biomarkers, such as alpha-fetoprotein (AFP), are widely used in clinical practice but have significant limitations in sensitivity and specificity, especially for early HCC detection. 
Other clinical methods, including ultrasound imaging, CT and MRI, also show limitations in sensitivity, cost and availability for routine screening, highlighting the need for novel biomarkers and diagnostic methods. The role of metabolic drivers in tumorigenesis has recently been reviewed , showing that metabolic remodeling may begin at early stages of tumorigenesis and providing the theoretical basis for identifying metabolite markers for early detection of cancer. Multiple previous metabolomic studies have identified potential markers of HCC , yet few studies have focused on HBV-related HCC in well-designed schemes. Here we uncover new markers of HBV-related HCC via lipidomic analysis in patients with HBV infection. The samples we used for this study were collected within a short time interval, and the subjects in the CHB, LC and HCC groups were matched in age, sex and HBV infection, excluding potential confounding factors to ensure the identification of truly important metabolite features. Moreover, the majority of subjects in the HCC group were at early stages, underscoring the value of the selected markers for early HCC detection. Studies on HCC diagnosis by machine learning have been recently reviewed , in which various datatypes were used, including clinical data, imaging data, pathology data, and gene sequencing data. Although multiple studies have used plasma lipidomics to explore novel markers of early HCC , few studies have used combined methods of plasma lipidomics and machine learning to identify novel diagnostic markers. Lewinska et al. employed lipidomics and ML-based feature selection in the detection of NAFLD-HCC and identified increased fatty acid uptake in NAFLD-HCC. Powell et al. analyzed serum metabolomics and lipidomics in a small cohort of 28 HCC and 30 cirrhosis subjects with ML algorithms, although the limited coverage of metabolites and lipids and the small sample size might lead to over-optimistic ML classification performance due to model overfitting. In this study, aiming to identify lipid characteristics and potential diagnostic markers of HBV-related HCC, a relatively larger cohort of 61 CHB, 57 LC and 57 HCC subjects was established and ML-based feature selection was performed to enhance the robustness of the selected markers. Regarding the role of lipid metabolism in HCC development, previous studies have revealed disrupted lipid metabolism and significant alteration of multiple lipid classes . A decrease of circulating polyunsaturated PCs was identified in HBV-related HCC , which is consistent with our result that most PCs in ESI- cluster 1, showing a consecutive decline, were polyunsaturated. Moreover, decreased levels of LPC were observed in HCV-related HCC , which our data suggest is also the case in HBV-related HCC, indicating that downregulation of LPC might be a common feature of hepatitis virus-related HCC. Here, we specifically targeted HBV-related HCC and controlled for confounding factors such as age, sex, and HBV status. Our results are consistent with previous findings that lipid remodeling occurs during liver disease progression, yet we provide novel evidence linking ether PC dysregulation with hepatocarcinogenesis and uniquely highlight ether PCs as potential biomarkers with significant diagnostic implications for HBV-HCC. 
Ether PC, also termed 1-O-alkyl-2-acyl-GPC, is a type of peroxisome-derived glycerophospholipid whose sn-1 chain is attached to the backbone by an ether bond , and it plays important roles in membrane structuring and cell signaling. Significant elevation of ether PC has been reported in obesity and systemic inflammation , raising the possibility that elevated ether PC levels reflect general poor health rather than cancer-specific metabolic remodeling. To address this, we propose that future studies control for confounding factors such as BMI, inflammation, and metabolic comorbidities. Additionally, plasmalogens, a subclass of ether glycerophospholipids containing a vinyl-ether-linked alkyl chain at the sn-1 position, have recently been implicated in neurodegenerative and cardiometabolic diseases . Given that a role of ether lipids in membrane trafficking has been proposed , combined with previous evidence that impaired membrane trafficking in hepatocytes may lead to HCC and with the increased ether PC in HCC patients observed in our data, we hypothesize that increased ether PC might promote tumorigenesis via dysregulated membrane trafficking. Considering that the specific role of ether PCs in hepatocarcinogenesis remains poorly understood, our data may provide future directions for investigations into the biological roles of ether PCs. While our findings demonstrate the diagnostic potential of ether PCs, we acknowledge the challenges and limitations inherent to lipidomics, as well as the need for further validation. First, as a single-center study, our study is limited in sample size, and the conclusions and the utility of the selected markers derived from this cohort still need further validation in external cohorts from other medical centers. Second, the absence of chemical or isotope-labeled standards for each lipid feature might limit the precision of identity confirmation and quantitation of the selected ether PCs. Additionally, the identification of lipids heavily depends on the software and libraries used for data analysis. Although we utilized the MS-DIAL software with integrated lipid libraries, the variability among software platforms could introduce uncertainties. These limitations underscore the need for cross-validation across platforms and the inclusion of robust standards in future studies . Furthermore, novel separation strategies and techniques such as ion mobility and ozone-mediated cleavage and derivatization may identify the structures of the indicated markers more precisely , which would eventually enhance the translational opportunity of the selected markers in varied cohorts. Taken together, we identify a role of ether PC in hepatocarcinogenesis and show the value of plasma ether PCs in the early detection of HCC in patients with HBV-related liver diseases in this cohort. Our data may provide novel targets for early intervention in malignant conversion and a novel method for early detection of HCC in the high-risk population with HBV infection. Additional file 1: Fig. S1. Overview of the results of untargeted lipidomic analysis by PLSDA visualization (A) and t-SNE visualization (B) in the groups of CHB, HCC and LC. Fig. S2. Comparison of ROC curves for classifying (A) CHB from HCC and (B) LC from HCC by ML algorithms including extreme gradient boosting (XGBoost), decision tree (DT), logistic regression (LR), neural network (NNet), K-nearest neighbor (KNN), random forest (RF) and support vector machine (SVM). Additional file 2: Table S1. 
Summary of clinical characteristics and lab tests results of enrolled subjects. Table S2. Clinical characteristics and lab tests of individual study subjects. Table S3. Number of differential lipids in each lipid class detected in ESI + or ESI- ion mode. Table S4. Selected ether PCs and corresponding weights for SVM modeling of CHB vs. HCC or LC vs. HCC. Table S5. Correlation coefficients between ether PCs and clinical lab tests reflecting liver function. Table S6. Parameters of PLSDA analysis. Table S7. Summary of characteristics of HCC subjects from TCGA and CPTAC datasets.
Is longer really better? Results of a retrospective real-life cohort study evaluating the benefit of adding a weekly educational session to a traditional 8-week home-based pulmonary rehabilitation programme in people with COPD
bce72212-a4f8-489a-960a-58070493b173
11749871
Patient Education as Topic[mh]
Pulmonary rehabilitation (PR), including education, motivational support for behaviour change and exercise training, is a well-recognised interdisciplinary intervention that is highly effective at improving dyspnoea, health-related quality of life (HRQoL) and exercise tolerance in people with moderate to severe chronic obstructive pulmonary disease (COPD). PR programmes are commonly outpatient based, delivered in twice- or thrice-weekly supervised sessions over 6–12 weeks for a total of 16–36 sessions. French PR programmes follow the European guidelines, and 20 sessions are recommended for maximum benefits. A minority of patients have access to centre-based PR, and although home-based PR accounts for less than 5% of delivered interventions, this setting could provide increased capacity by eliminating barriers that affect outpatient PR attendance such as travelling distance or long-term oxygen therapy. In COPD, home-based PR is feasible and leads to the same short- and long-term benefits as inpatient or outpatient programmes. The number of home-based supervised PR sessions is often lower than in centre-based PR, and patients need to be rapidly empowered to perform, on their own, additional unsupervised weekly exercise training to achieve national recommendations. Despite scientific and clinical consensus on the duration of outpatient programmes (8–12 weeks), the optimal number of supervised PR sessions remains debated as the available evidence is still insufficient. A few randomised controlled trials (RCTs) published 20 years ago addressed this topic as the primary study outcome and were compiled in a systematic review. However, due to heterogeneity in PR programme duration and outcomes assessed, the authors were not able to perform a meta-analysis; therefore, no recommendation was made. While the overall conclusion indicated that longer programmes (20–72 weeks), therefore those with more supervised PR sessions, may produce slightly better improvements in health-related quality of life than shorter interventions (8–12 weeks), two trials showed that whether it was once-weekly (6 sessions) compared with twice-weekly (12 sessions) or 4 weeks (8 sessions) compared with 7 weeks (14 sessions), the shortest setting showed similar benefits in health-related quality of life and exercise capacity to longer programmes. Beyond the number of supervised sessions, these trials highlighted the importance of unsupervised but structured home-based training sessions tailored and adjusted by healthcare professionals. However, short-term benefits were not always maintained 6 months after both short and long programmes, highlighting the crucial role of meaningful motivational support from the outset of PR and questioning PR maintenance strategies. Given these results and the consensus that needs to be reached between patients, health professionals and policy makers, it is important to establish whether less-frequent supervised home-based PR sessions could achieve the well-known benefits of PR. Therefore, the objective of this real-life retrospective study was to evaluate the short- and long-term benefits of adding one educational home-based session per week (8 PR sessions+8 educational sessions) to a traditional 8-week home-based PR programme, on health-related quality of life, dyspnoea, anxiety and depressive symptoms, fatigue, exercise tolerance and functional capacity in people with COPD. 
The primary hypothesis was that 8 home-based supervised sessions would be equivalent to 16 home-based supervised sessions at both short and long term after PR. Study design and participants This was a retrospective study conducted on prospectively collected data, from January 2010 to December 2021. Eligible individuals, aged >18 years, were referred to the home-based PR programme ( FormAction Santé , Pérenchies, France) by their pulmonologist, who was responsible for documenting the presence of COPD according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD) classification system (inclusion criteria) and validating that the participants had no cardiovascular contraindications (unstable and uncontrolled cardiovascular comorbidities despite treatment) to exercise training (exclusion criteria). All participants received an 8-week personalised home PR programme, including a weekly 90-min supervised session as previously described (8 supervised sessions in total). A subgroup of participants received one additional supervised home session per week (8 PR sessions+8 educational sessions), offered by another healthcare company ( Santélys , France), which cares for people with COPD requiring home-based hospitalisation. These eight additional supervised home sessions included education and motivational support for daily physical activities and walking. Therefore, participants were retrospectively divided into two groups: group 1 receiving 8 supervised sessions (once-weekly) and group 2 receiving 16 supervised sessions (twice-weekly). Retrospective exclusion criteria were FEV 1 >80% of predicted value, missing spirometry data or a second PR programme performed less than a year after the first one. Human Research Ethics approval was provided by the observational research protocol evaluation committee of the French Language Society of Pulmonology (CEPRO 2021–054), who approved the retrospective analysis. All participants signed a written informed consent before the start of the programme. Home-based PR programme The programme started with an evaluation of the patient’s needs and expectations leading to the formulation of a personalised plan (learning needs assessment). Physical training, educational, motivational and self-management plans were designed and implemented through a collaborative process between the PR team, the patient and his/her caregiver. Apart from the weekly visit of the team member who supervised the sessions during the first 8 weeks, participants were expected to perform, on their own, their personalised physical training and self-management plan the rest of the week and during the 1-year follow-up period, during which there was no supervised maintenance strategy by the PR team. All the PR healthcare professionals received training in the principles of behaviour change and motivational communication skills, which were used to encourage health-promoting behaviour change. Education and self-management interventions were adapted to respond to individuals’ needs, barriers and personal goals. Education topics covered pathophysiology of lung disease and comorbidities, medication and its use (bronchodilator, oxygen and noninvasive ventilation), breathing techniques, prevention and recognition of exacerbations, physical exercise, stress management and emotional responses related to the disease. 
Other topics could be addressed according to participants’ needs: nutrition and weight control, smoking cessation, airway clearance strategies, mindfulness meditation and end-of-life planning. A cycle ergometer (Domyos 120, Decathlon, Villeneuve-d’Ascq, France) and/or a stepper (Go Sport, Grenoble, France) were available at home to perform exercise training during the 8-week PR programme. Cardiorespiratory training was initially performed in 10-min bouts (or sometimes shorter if the participant was unable to exercise for 10 min), at least 5 days per week, trying to achieve 30–45 min of exercise, in one or several sessions, per day. Exercise intensity was adjusted to maintain a Borg dyspnoea score between 3 and 4 on the Borg 0–10 scale. Physical training was completed with upper and lower limb muscle strengthening exercises using dumbbells, elastic bands and/or body weight on the same daily basis as cardiorespiratory training. Intensity was gradually adjusted (increasing the number of repetitions and/or resistance) according to the participant’s dyspnoea or fatigue. Participants were also encouraged to increase the amount of time spent in daily life physical activities such as gardening, housekeeping and grocery shopping, to encourage the integration of physical activities that can be pursued over the long term. Patient and public involvement With the exception of the outcome measures, the participants were involved in the design and implementation of the study. As the home-based PR programme was designed with a person-centric approach, participants were encouraged to select the physical activity, educational and self-management programme that aligned with their individual needs and capacities. However, the participants were unable to select their group allocation, and no participants were asked to advise on the interpretation or writing up of results. The results of this study will be accessible to participants online via a dedicated website ( www.formactionsante.com ). Data collection Lung function according to standard guidelines, medication and comorbidity data were collected from the individual’s medical record provided by the respiratory specialist. The EPICES multidimensional questionnaire was used to assess social deprivation on a quantitative and continuous scale ranging from 0 (no deprivation) to 100 (maximum deprivation). A cutoff score of >30.17 suggests social deprivation. Participants were evaluated at home, at the beginning (M0), at the end of the 8-week programme (M2, short term) and at 14 months (M14, long term) after M0 to conclude a full year of follow-up. No home visits or telephone calls were performed between M2 and M14, with the exception of one home visit 6 months after the end of PR in which motivational support was provided. The Hospital Anxiety and Depression (HAD) scale (14 items: seven each for anxiety and depression with minimum and maximum subscores of 0 and 21; lower is better) and the Fatigue Assessment Scale (FAS) (10 items: five reflecting physical fatigue and five reflecting mental fatigue with a test score ranging from 10 to 50; lower is better) were assessed. An anxiety or depressive symptoms score >11 indicates a probable clinical diagnosis of anxiety or depression, and a FAS score ≥22 suggests abnormal fatigue. The minimal clinically important difference (MCID) of the HAD and FAS is considered to be a change of 1.5 units and 4 points, respectively. 
Health-related quality of life was evaluated from January 2010 to December 2016 with the Visual Simplified Respiratory Questionnaire (VSRQ) (8 questions scored from 0 to 10, with a total score ranging from 0 to 80; higher is better) and then from January 2017 to December 2021 with the COPD Assessment Test (CAT) (8 items, with a total score ranging from 0 to 40; lower is better). In COPD, the MCID of the VSRQ and CAT is considered to be a change of 3.4 and 2 points, respectively. The mMRC breathlessness scale was also used to evaluate the physical dimension of dyspnoea. The 6-min stepper test (6MST) and the timed-up and go (TUG) test were used to evaluate exercise capacity and functional capacity at home, as previously described. The MCID of the 6MST is considered to be a change of 40 steps in COPD, and a change of 0.9–1.4 s in TUG performance was identified as clinically important; a cutoff of 1 s was selected for the present study.
Statistical analyses
Statistical analyses were performed using IBM SPSS Statistics 29.0, with the statistical significance threshold set at 0.05. Quantitative variables are expressed as mean and SD, or as median and IQR in case of non-Gaussian distribution, and qualitative variables as numbers and frequencies. Normality of distribution was verified graphically and using Kolmogorov–Smirnov tests; data were normally distributed. Comparisons of the baseline characteristics and assessments between groups were performed using one-way ANOVAs. Linear mixed models with a random intercept, to account for the correlation between repeated measurements within the same individuals, were used to evaluate the changes in study outcomes over time (M2 and M14). Normality of the model residuals was checked for each outcome using graphs of conditional residuals.
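As a rough sketch of the mixed-model analysis described above, the Python example below fits a linear mixed model with a random intercept per participant to synthetic repeated-measures data. The column names, synthetic values and use of statsmodels are illustrative assumptions; the authors' analysis was run in SPSS.

```python
# Illustrative sketch (synthetic data, assumed column names): linear mixed model with a
# random intercept per participant for repeated measures at M0, M2 and M14.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
participant = np.repeat(np.arange(n), 3)
time = np.tile(["M0", "M2", "M14"], n)
group = np.repeat(rng.choice(["Gr1", "Gr2"], size=n), 3)
subject_level = np.repeat(rng.normal(300, 60, size=n), 3)   # per-participant intercept
improvement = np.where(time == "M0", 0.0, 60.0)             # improvement after PR
sixmst = subject_level + improvement + rng.normal(0, 25, size=3 * n)

df = pd.DataFrame({"participant": participant, "time": time,
                   "group": group, "sixmst": sixmst})

# The random intercept accounts for the correlation between repeated measurements
# obtained within the same individual.
model = smf.mixedlm("sixmst ~ C(time, Treatment('M0')) * group",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```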
From January 2010 to December 2021, 1255 people with COPD diagnosed by their lung specialist were referred to the PR programme, and 21 (1.7%) did not enrol after the learning needs assessment visit (lack of motivation, n=5; death, n=3; hospitalisation, n=3; no reason, n=10). Of the remaining participants, 152 (12.3%) were removed from the analysis because of missing spirometry data and 61 (4.9%) because it was not their first PR programme. Among the 1021 participants included in the retrospective analysis (mean age of 65.1±10.0 years and mean FEV1 of 38.4±17.6% of predicted), 759 (74.3%) performed 8 home-based sessions (Gr 1) and 262 (25.7%) performed 16 home-based sessions (Gr 2). At baseline, compared with Gr 2, participants in Gr 1 were more often receiving long-term oxygen therapy (69.8% vs 53.0%, p<0.001) and noninvasive ventilation (38.6% vs 29.8%, p=0.015). All the other baseline characteristics and assessments were similar between groups. The baseline characteristics of the participants included in the sensitivity analysis are presented in the supplement. In the analysis of the completers at M0 and M2, only the baseline 6MST was higher in Gr 1 than in Gr 2 (320 vs 292 strokes, p=0.024). A flowchart of the study participants is presented in . Among the 1021 included patients, 72 (7.1%) did not complete PR (56 (7.4%) in Gr 1 and 16 (6.1%) in Gr 2), and 680 (66.6%) performed the 12-month follow-up visit (M14) (491 (64.7%) and 189 (72.1%), respectively, p=0.028). From M0 to M14, 90 (11.9%) participants died in Gr 1 and 29 (11.1%) in Gr 2 (p=0.794). In both groups, all the assessments were significantly and clinically improved at the end of PR (p<0.005). At short term, VSRQ and 6MST improvements were greater in Gr 2 than in Gr 1 (VSRQ, +9.4 vs +6.9 points, p=0.039; 6MST, 80 vs 61 strokes, p=0.023). With the exception of the TUG, all the assessments were also improved at M14 compared with M0 (p<0.005). These improvements were similar between groups at long term. A box plot of the 6MST delta improvements at M2 and M14 in both groups shows the large intersubject variability of the 6MST improvement, especially in Gr 1 at the end of PR, with outliers ranging from a decrease of 362 strokes to an increase of 560 strokes.
Although PR is widely recognised as an essential component of the integrated care of people with COPD, the optimal number of physical training and educational sessions remains a topic of ongoing debate. This real-life retrospective cohort study demonstrated that whether participants performed 8 supervised PR sessions (Gr 1) or 8 supervised PR sessions plus 8 supervised educational sessions (Gr 2) over 8 weeks, they significantly improved health-related quality of life, dyspnoea, anxiety and depressive symptoms, fatigue and exercise tolerance at both short and long term. At the end of PR, Gr 2 showed a greater improvement in health-related quality of life (VSRQ only) and exercise tolerance than Gr 1. However, 1 year after the end of PR, the benefits were similar between groups. These results suggest that a PR programme of once-weekly home-based supervised sessions over 8 weeks, combined with unsupervised home physical training sessions and a self-management plan for the other health behaviours, might be the best compromise between patients, health professionals and policy makers.
At the end of the 2-month supervised intervention, both groups significantly improved health-related quality of life, fatigue symptoms, anxiety and depressive symptoms, dyspnoea, exercise tolerance and functional capacity. While all the improvements also reached the clinical threshold in Gr 2, Gr 1 did not reach the MCID for the fatigue and anxiety symptom scores. Moreover, the benefits on health-related quality of life (VSRQ) and exercise tolerance were significantly greater in the group that received more supervised sessions. As shown in , we chose to keep the outliers in the 6MST analyses, as they reflect the variations in physical capacity that people with COPD experience over the course of disease progression. For example, the outliers who showed a 6MST decrease of >300 strokes after PR all reported a recent COPD-related exacerbation and/or hospitalisation. Therefore, the large intersubject variability and the higher number of outliers in Gr 1 compared with Gr 2 may have influenced the results. As the majority of participants did not complete a physical training diary, we cannot rule out that the number of unsupervised physical training sessions also contributed to the short-term 6MST difference between groups. The short-term results are consistent with the conclusion of the only review on this topic: longer-duration PR programmes (20–72 weeks), and therefore those offering more supervised sessions (54–216 sessions), may have more favourable effects on health-related quality of life in people with COPD, although the participants of the present study received a maximum of 16 supervised sessions. Recent RCTs have shown that exercise tolerance, health-related quality of life and dyspnoea can be improved by 8–14 remotely supervised home sessions (weekly phone calls) in people with COPD, but with divergent results regarding long-term maintenance. Moreover, the review by Beauchamp et al included five RCTs (published from 1990 to 2006), of which only two offered education combined with exercise training. One of these trials showed that health-related quality of life and exercise capacity were similarly improved by both the short and long interventions (4 weeks, 8 sessions vs 7 weeks, 14 sessions). Over the past 20 years, the content of PR programmes has evolved, giving a more substantial role to education and psychosocial support through motivational communication and self-management plans to encourage patients to adopt healthier behaviours. The most important result of this real-life study is that, with the exception of the timed-up and go test, all outcomes were significantly improved 12 months after the end of the home-based PR programme, regardless of the number of supervised sessions performed. No maintenance strategy (such as phone calls, telerehabilitation or home visits) was delivered by the PR team after the end of PR. However, the design of the home-based PR was patient-centred; it started with an evaluation of the patient's needs, health beliefs and expectations, leading to the formulation of personalised objectives. Physical training, education, motivational and self-management plans were implemented using appropriate strategies and readjusted during the 8 sessions to achieve the patient's objectives.
A collaborative process between the PR team, the person with COPD and his/her caregiver (if present) was implemented throughout the 8-week programme to negotiate the self-maintenance of exercise training/daily life physical activity and positive health behaviours. This design may explain the similar significant benefits between groups 1 year after the end of PR. When designing a PR programme, a balanced compromise must be reached between (1) the needs, availability, capacity and commitment of the patients; (2) the capacity of healthcare professionals to provide optimal long-term benefits for a majority of participants; and (3) a reasonable cost for the funders and policy makers. To reach such a compromise, flexibility and diversity among PR settings should be offered to patients with chronic respiratory diseases. In light of the guidelines' recommendation of an 8-week programme duration, future trials should focus on providing robust evidence on the optimal number of sessions within this duration for people with COPD. It would be beneficial to consider the implementation of tailored programmes, in which the conventional number of PR sessions can be extended or shortened as required to align with the evolving needs of the individual as well as temporal constraints. This would be particularly relevant to patients who are referred to PR in preparation for thoracic surgery or lung transplantation, or those entering aggressive treatment for bronchopulmonary cancer.
Methodological considerations
One of the strengths of the study was the inclusion of more than a thousand people with severe COPD over a decade of real-life home-based PR practice. By improving external validity and establishment in usual care, real-life studies are useful to complement the results of traditional randomised controlled trials. Nevertheless, the monocentric, non-randomised and uncontrolled nature of this study may limit the scope of the present results, which should be confirmed by more robustly designed trials. Although the group allocation was not randomly assigned, it was not selected by either the patient or the PR team. Participants received 16 supervised sessions if, in addition to PR ( FormAction Santé ), they also required home-based hospitalisation ( Santélys ), which limits the selection bias for this study. Although education is an essential component of PR and both healthcare teams received the same initial standardised therapeutic education from the same licensed instructor, we recognise that 8 educational sessions are not equivalent to 8 PR sessions. Therefore, our study is not comparable with other trials that have evaluated the effectiveness of short versus long PR programmes. Since using a diary was optional, adherence to the physical training sessions cannot be reported, and we cannot rule out that the greater short-term improvement in exercise tolerance in Gr 2 is related to a greater amount of physical exercise training during PR. Tracking unsupervised physical training is a challenge for real-life home-based studies that has yet to be addressed. Finally, given the real-life study design, health-related quality of life was evaluated using two questionnaires over the 10-year period of PR practice. As the comparisons were intragroup, it can be presumed that this did not impact the study results.
Conclusion
This real-life study showed that people with COPD benefit from either 8 or 16 supervised home-based PR sessions.
Participants who received more supervised sessions showed a statistically significant, but not clinically meaningful, greater short-term improvement in health-related quality of life and exercise tolerance. However, benefits were similar between groups 1 year after the end of PR. Once-weekly home-based supervised sessions over 8 weeks, combined with unsupervised physical training sessions and a self-management plan for the other health behaviours, might be the best compromise between patients, health professionals and policy makers.
Evaluating Expert-Layperson Agreement in Identifying Jargon Terms in Electronic Health Record Notes: Observational Study
c961e60b-b154-4fb5-b3ac-f6881878c57e
11522659
Health Literacy[mh]
In recent years, federal regulations such as the Cures Act and the 2022 Office of the National Coordinator for Health Information Technology's Cures Act Final Rule have mandated that health care providers grant patients full access to their electronic health records (EHRs) through patient portals. Patients' access to their EHRs represents a new communication channel between doctors and patients that can facilitate faster communication of test results, medication lists, and other information. These regulations include the EHR notes as part of the data that must be provided. Access to EHRs can enhance patients' understanding of their disease, improve communication between patients and their care providers, improve medication adherence, and reduce health care costs. However, EHRs contain medical jargon that patients, especially those with low health literacy, may not understand. The following example is typical of the text that patients see in their EHR notes: "From a GI standpoint, we recommend to proceed with bariatric surgery. However, he will need to continue daily PPI administration to maximize acid reduction." Without significant medical knowledge, it is challenging to understand the meanings of jargon terms such as "GI" (gastrointestinal), "bariatric," and "PPI" (proton pump inhibitor). Innovations are needed to support patients' use of EHR notes and translate them into language that is easier for them to understand. We found in our previous work that defining jargon and the readability of those definitions were positively associated with improved EHR note comprehension.
Jargon in EHR Notes
According to the Cambridge Dictionary, jargon is defined as "language used by a particular group of people, especially in their work, and which most other people do not understand". Technical jargon occurs across disciplines and reflects the amount of specialist knowledge in a field. Jargon can aid in communication by succinctly describing complex concepts. However, it can also impede communication with and comprehension by those unfamiliar with the language. For accurate comprehension, a reader must be familiar with 95%-98% of the words in a text. Medical records contain large amounts of jargon and abbreviations, and recent work has shown that patients cannot consistently comprehend the meaning of jargon terms, including abbreviations and acronyms. Recent studies involving over 10,000 patients showed that allowing patients to read the clinical notes in their medical records confused some, especially those in vulnerable groups such as patients with lower literacy or lower income. Similar work on patient portals shows that patients struggle to make use of their access. There has been work in the literature on identifying and defining jargon in various fields, from medical informatics to scientific communication. A significant gap in the previous work is that jargon identification is examined from an expert point of view. In medicine, consumer health vocabularies bridge the gap between patient and physician terminology. Consumer health vocabularies are typically collected by analyzing user search queries, web-based community forums, and patient-provider communications. However, there is little work analyzing what patients recognize as jargon in the context of EHR notes. Furthermore, the literature has not examined whether what patients recognize as jargon differs across demographic groups.
Recently, Pitt and Hendrickson proposed a classification system for 7 types of medical jargon: technical terminology, alphabet soup, medical vernacular, medicalized English, unnecessary synonyms, euphemisms, and judgmental jargon. This categorization is broad enough to cover domain-specific technical terms (eg, "myocardial infarction" or "ambulating") as well as more common words with specific medical meanings (eg, "progressing" or "positive").
NoteAid
To assist patients with understanding medical notes, we are developing NoteAid, a natural language processing system that links medical jargon in EHR notes to definitions and addresses context-dependent meanings (eg, the abbreviation MAP could refer to muscle action potential or mean arterial pressure). A team of interdisciplinary experts identifies jargon from a corpus of EHRs and writes definitions for them. Specifically, NoteAid definitions of identified terms are written for a fourth- to seventh-grade reading level. For example, in the EHR sentence referenced above, NoteAid would provide the following definition: bariatric surgery: Surgery on the stomach and intestines for weight loss. We have written definitions for approximately 30,000 jargon terms. Our operational use of the term "medical jargon" in this study is patient-focused, not clinician-focused. NoteAid uses a 2-stage process for identifying jargon. First, the software generates a word frequency list from the EHR corpus. Starting from the most frequent word on the list, it presents sentences that contain the word to the definition writer. The definition writer reviews the sentence and decides which terms are jargon, following a set of guidelines in making this decision (more details in ). Examples of the guidelines include a medical term that would not be recognized by a layperson with a fourth-grade education (eg, duodenum); a word that has a different meaning in the medical context than in the lay context (eg, accommodate: when the eye changes focus from far to near). Still, the determination of what is and is not jargon is uncertain, as patients differ widely in their education, general level of health literacy, and experience with medical conditions.
In this observational study, we examined how well the NoteAid definition writers agreed with each other in identifying jargon and how well they agreed with laypeople. We used Amazon Mechanical Turk (MTurk) workers as proxies for laypeople. We also investigated how jargon identification varied across different demographic subgroups. We expected that demographic subgroups typically associated with lower health literacy would identify more terms as jargon than other subgroups. These groups include older adults, certain race or ethnic groups, those with a high school education or less, those whose native language is not English, and those who score low on a health literacy screener.
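As a rough illustration of the frequency-driven first stage of jargon identification described above, the sketch below builds a word frequency list from a tiny hypothetical corpus of note sentences and retrieves an example sentence for each candidate term to show a definition writer. It is a simplified stand-in, not NoteAid's actual implementation.

```python
# Hypothetical sketch of a frequency-driven jargon-candidate queue (not NoteAid's code).
import re
from collections import Counter, defaultdict

corpus = [
    "From a GI standpoint, we recommend to proceed with bariatric surgery.",
    "He will need to continue daily PPI administration to maximize acid reduction.",
    "Patient remained hemodynamically stable after the procedure.",
]

def tokenize(sentence):
    return re.findall(r"[A-Za-z][A-Za-z\-']*", sentence.lower())

frequency = Counter()
example_sentences = defaultdict(list)
for sentence in corpus:
    seen = set()
    for token in tokenize(sentence):
        frequency[token] += 1
        if token not in seen:                 # store each sentence once per term
            example_sentences[token].append(sentence)
            seen.add(token)

# Present terms from most to least frequent, each with a sentence giving context,
# so the definition writer can decide whether the term is jargon.
for term, count in frequency.most_common():
    print(f"{term} ({count}): {example_sentences[term][0]}")
```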
We conducted an observational study to examine the agreement between the NoteAid definition writers and laypeople on what is considered medical jargon.
Data Source
The NoteAid dictionary used medical notes from the PittEHR database of deidentified inpatient medical records. The records consist of emergency department notes, progress notes, consult notes, operative reports, radiology reports, pathology reports, and discharge summaries, all written by physicians. We randomly selected 20 sentences from the database. Sentences that contained only administrative data, contained fewer than 10 words, or were substantially similar to another selected sentence were not included. The NoteAid definition writers had not previously seen these sentences.
Identifying Terms for Annotation Task
The 20 sentences contained a total of 904 words. So as not to inflate the calculated agreement, we excluded from the analysis common words, which we defined as all conjunctions, pronouns, prepositions, numerals, articles, contractions, names of months, punctuation, and the 25 most common nouns, verbs, adjectives, and adverbs, including their plural forms. Terms that were repeated in a sentence were only counted once. Multiword terms were analyzed as single terms to avoid double counting. We considered multiword terms to be adjacent words that represented a distinct medical entity (examples include PR interval, internal capsule, and acute intermittent porphyria), terms that were routinely used together (examples include hemodynamically stable, status post, and past medical history), or terms that were modified by a minor word (examples include trace perihepatic fluid, mild mitral regurgitation, rare positive cells, and deep pelvis). The grouping of multiword terms was determined by 2 members of the research team after reaching a consensus. There were 325 potential jargon terms in the final analysis. We performed a second analysis in which only the common words were excluded, and there was no grouping of multiword terms. This process resulted in 549 potential jargon words in the analysis.
Data Collection
Data collection took place between August 2020 and April 2021. NoteAid definition writers and MTurk workers were shown the 20 sentences. MTurk workers were asked to identify those terms for which they did not know the definition. In this paper, we refer to these identified terms as "jargon." The NoteAid definition writers were asked to identify terms that they considered to be jargon, that is, terms for which laypeople would not know the definition. MTurk workers were instructed not to consult any sources of information during the task. Interspersed among the 20 sentences were 3 attention-check questions to test whether the participant was paying attention.
If a participant answered 2 of the 3 attention checks incorrectly, the participant's responses were discarded, and the participant was replaced (however, the participant was not excluded from reentering the study).
Participant Recruitment
We recruited adult MTurk workers and collected demographic information about the workers' age, sex, race or ethnicity, education, native language, and health literacy. We performed subgroup analyses based on MTurk worker characteristics. To evaluate health literacy, MTurk workers were screened with the Single Item Literacy Screener. MTurk workers who worked in the health care field were excluded from the study. We also excluded MTurk workers with a previous approval rating below 95% on the MTurk platform. MTurk workers were oversampled to obtain equal numbers of MTurk workers in each of the education subgroups and an equal number of non-White and White participants. We sampled 270 MTurk workers and 6 definition writers to complete the study instrument. The 6 definition writers were all experienced biomedical annotators with advanced degrees in medicine, nursing, biostatistics, and biomedical research.
Evaluation
Our evaluation metrics were the proportion of terms rated as jargon, sensitivity, specificity, and Fleiss κ for agreement among NoteAid definition writers and among MTurk workers. Wald CIs were calculated at 95%. We analyzed NoteAid definition writers individually and as a group. Sensitivity and specificity measured the NoteAid definition writers' ability to correctly discriminate between jargon and nonjargon, using the MTurk workers' responses as the gold standard. Since the 270 MTurk workers did not all agree on which terms were jargon, the cutoff number of MTurk workers for defining a term as jargon was chosen using the Youden index. The Youden index calculates sensitivities and specificities for all possible thresholds (ie, all possible cutoffs separating jargon from nonjargon); the cutoff where the sum of sensitivity and specificity was highest was selected. To determine whether the definition writers' jargon selection was systematically different from that of the MTurk workers, we calculated the Kendall rank correlation statistic between the definition writers and MTurk workers. We also analyzed the results by MTurk worker characteristics to determine if specific subpopulations differed from the definition writers in the terms they identified as jargon. We fit a beta regression model with a logit link to examine the association between MTurk worker characteristics and whether a term was classified as jargon. We used beta regression because it does not assume that individual terms have the same probability of being rated as jargon (eg, joint vs articulation). Here, the proportion of terms an MTurk worker identified as jargon served as the dependent variable, and the MTurk worker characteristics (sex, age group, race or ethnicity, education level, native language, and health literacy score) served as the independent variables. We assumed a linear relationship between the predictor and dependent variables, which we confirmed by checking the residuals. To check for the possibility of interactions among the independent variables, we explored adding different combinations of 2-way interaction terms. Models were evaluated using pseudo-R2 and the Akaike information criterion (AIC) values.
Ethical Considerations
We conducted this study with approval from the institutional review board at the University of Massachusetts Lowell (H00010621).
Informed consent was obtained from MTurk workers, and they had the option to leave the task at any time. MTurk worker data were anonymized, and EHR data were deidentified. MTurk workers were paid US $3 for the task, which took an average of 20 minutes to complete.
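To make the Youden-index procedure described under Evaluation concrete, the sketch below (synthetic data and illustrative variable names, not the study's code) scans all possible cutoffs for the number of MTurk workers flagging a term, selects the cutoff with the highest sensitivity plus specificity, and then scores a definition writer against the resulting gold standard.

```python
# Illustrative sketch (synthetic data): choosing the MTurk cutoff with the Youden index
# and scoring a definition writer against the resulting gold standard.
import numpy as np

rng = np.random.default_rng(1)
n_terms, n_workers = 325, 270

# worker_votes[i, j] is True if worker j flagged term i as jargon (synthetic here).
underlying_jargon = rng.random(n_terms) < 0.4
flag_prob = np.where(underlying_jargon, 0.5, 0.02)
worker_votes = rng.random((n_terms, n_workers)) < flag_prob[:, None]
votes_per_term = worker_votes.sum(axis=1)

# A synthetic definition writer who mostly matches the underlying labels.
writer_says_jargon = underlying_jargon ^ (rng.random(n_terms) < 0.1)

def sens_spec(pred, gold):
    tp = np.sum(pred & gold)
    tn = np.sum(~pred & ~gold)
    return tp / max(gold.sum(), 1), tn / max((~gold).sum(), 1)

best_cutoff, best_youden = None, -1.0
for cutoff in range(1, n_workers + 1):
    gold = votes_per_term >= cutoff            # gold standard at this cutoff
    sensitivity, specificity = sens_spec(writer_says_jargon, gold)
    if sensitivity + specificity - 1 > best_youden:
        best_cutoff, best_youden = cutoff, sensitivity + specificity - 1

gold = votes_per_term >= best_cutoff
sensitivity, specificity = sens_spec(writer_says_jargon, gold)
print(f"cutoff={best_cutoff}, sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
```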
Characteristics of the 270 MTurk workers are presented in the accompanying table. The average proportion of terms identified as jargon by the MTurk workers overall was 25.6% (95% CI 25%-26.2%). This proportion compares with 59% for the NoteAid definition writers (95% CI 56.1%-61.8%). Among MTurk worker subgroups, the average proportion of terms identified as jargon ranged from 17.7% to 30.9%.
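The per-worker proportions reported above are the dependent variable of the beta regression described under Evaluation. The following hand-rolled sketch fits a beta regression with a logit mean link by maximum likelihood on synthetic data with a single illustrative predictor; it is a simplified stand-in for the authors' model, which included all six worker characteristics.

```python
# Hand-rolled beta regression sketch (logit mean link, constant precision) on synthetic data.
import numpy as np
from scipy import optimize, special, stats

rng = np.random.default_rng(2)
n = 270
nonnative = (rng.random(n) < 0.05).astype(float)       # illustrative binary predictor
X = np.column_stack([np.ones(n), nonnative])

mu_true = special.expit(-1.0 - 0.6 * nonnative)        # true mean proportion flagged as jargon
phi_true = 30.0                                        # precision parameter
y = rng.beta(mu_true * phi_true, (1 - mu_true) * phi_true)
y = np.clip(y, 1e-4, 1 - 1e-4)                         # keep strictly inside (0, 1)

def neg_loglik(params):
    coefs, log_phi = params[:-1], params[-1]
    mu = special.expit(X @ coefs)
    phi = np.exp(log_phi)
    return -np.sum(stats.beta.logpdf(y, mu * phi, (1 - mu) * phi))

fit = optimize.minimize(neg_loglik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
intercept, coef_nonnative = fit.x[:2]
print("odds ratio for the illustrative predictor:", np.exp(coef_nonnative))
```

Ready-made beta regression implementations exist (for example, R's betareg package), but the hand-rolled likelihood above keeps the sketch self-contained.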
Participants with the lowest health literacy score (5) identified fewer terms as jargon ( P =.15, n=2), while participants with the second-lowest health literacy score (4) identified more terms as jargon ( P =.03, n=10). Both sample sizes were small.
Model
The beta regression model had a pseudo-R2 of 0.071, indicating that the amount of variability in the proportion of terms identified as jargon explained by the MTurk worker characteristics was very low. The only significant MTurk worker characteristic was being a nonnative English speaker ( P =.02). Compared with native English speakers, nonnative English speakers had a 0.535 odds ratio (95% CI 0.321-0.892) of identifying terms as jargon, controlling for sex, age, race or ethnicity, education level, and health literacy score. However, there were only 4 nonnative English speakers in the sample. Of the various 2-way interactions examined, only the addition of an interaction between race or ethnicity and education yielded a good model fit, and the interaction term was not statistically significant. The addition of this interaction slightly increased the proportion of variability explained (pseudo-R2=0.089) without meaningfully changing the AIC.
Agreement Among MTurk Workers and Among Definition Writers
The proportion of terms identified as jargon by NoteAid definition writers ranged from 48.3% to 64.9%. The agreement among NoteAid definition writers was good (Fleiss κ=0.781, 95% CI 0.753-0.809), with all agreeing on the categorization (jargon or not jargon) for 74.5% of terms. The proportion of terms identified as jargon by individual MTurk workers ranged from 1.2% to 57.5%. Agreement among MTurk workers was fair (Fleiss κ=0.590, 95% CI 0.589-0.591). For 61.9% of terms, at least 90% of MTurk workers agreed on the categorization (jargon or not jargon).
Agreement Between Definition Writers and MTurk Workers
Our main measures of agreement between definition writers and MTurk workers were sensitivity and specificity. Using the Youden index, the highest combined sensitivity and specificity corresponded to at least 3 out of the 270 MTurk workers identifying a term as jargon. Using this cutoff, the mean sensitivity for the NoteAid definition writers was 91.7% (95% CI 90.1%-93.3%), and the mean specificity was 88.2% (95% CI 86%-90.5%). These correspond to a false negative rate of 8.3% and a false positive rate of 11.8%, respectively. Among the individual NoteAid definition writers, sensitivity ranged from 78.1% to 95.8%, and specificity ranged from 79.7% to 94.7%. Using the same threshold of 3 MTurk workers for classifying a term as jargon, we found that 59.1% of the terms would be classified as jargon, which is remarkably close to the average of 59% identified by definition writers. The Kendall rank order correlation statistic was consistently high across all the MTurk worker characteristics, indicating no systematic differences in jargon identification between definition writers and different subpopulations of MTurk workers. All of the above analyses were repeated using single-word terms rather than multiword terms as the unit of analysis, and the results were not substantively different.
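For readers who want to reproduce this kind of agreement statistic, the minimal sketch below computes Fleiss κ from a synthetic terms-by-raters matrix of jargon/not-jargon labels using statsmodels; the data and the number of raters are illustrative, not the study's.

```python
# Illustrative sketch (synthetic data): Fleiss kappa for agreement on jargon labels.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(3)
n_terms, n_raters = 325, 6                    # e.g., the six definition writers

# ratings[i, j] = 1 if rater j marked term i as jargon (synthetic here).
tendency = rng.random(n_terms) < 0.6
ratings = (rng.random((n_terms, n_raters)) < np.where(tendency[:, None], 0.9, 0.1)).astype(int)

# aggregate_raters converts the terms-by-raters matrix into per-term category counts,
# which is the input format fleiss_kappa expects.
counts, _ = aggregate_raters(ratings)
print("Fleiss kappa:", fleiss_kappa(counts))
```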
Principal Findings
Jargon identification depends on the target audience. Patients differ widely in their education, general level of health literacy, and experience with medical conditions. What should or should not be considered jargon is often not clear, as evidenced by recent attempts to formalize the notion in a classification system. Using sensitivity and specificity as measures of agreement, we found good agreement between definition writers and MTurk workers.
The calculation of sensitivity and specificity for the NoteAid definition writers required a gold standard as to which terms are and are not jargon. However, since all 270 MTurk workers did not agree on which terms were jargon, we used the Youden index to determine the cutoff number of MTurk workers for defining a word as jargon. In this method, sensitivities and specificities were calculated for all possible cutoffs, and the cutoff whose summed sensitivity and specificity were highest was used. This technique gave the best balance between sensitivity and specificity, though it treats the cost of false positives and false negatives as the same. On average, we found that MTurk workers identified 25.6% (22,480/87,750) of terms as jargon, compared with 59% (1150/1950) of terms identified as jargon by the definition writers. However, this is not necessarily undesirable. The definition writers were identifying jargon terms for inclusion in the NoteAid dictionary; broad coverage in terms of inclusion is preferable in this context. Further, MTurk workers differed considerably as to which terms they considered jargon (Fleiss κ=0.590), so simply matching their average proportion of terms identified as jargon would exclude terms that some laypersons consider to be jargon. Therefore, we also evaluated definition writers on their sensitivity and specificity in identifying MTurk workers’ jargon terms and found high agreement. These results suggest that personalized technologies such as NoteAid are needed, where specific results are identified from a wider database; a general consensus on what is or is not jargon may lead to the exclusion of terms that require definitions for a subset of the population. Also, based on the MTurk worker health literacy scores, the average MTurk worker’s reading level was likely higher than fourth grade, the target level for NoteAid, which is consistent with previous work . Therefore, the 25.6% jargon proportion among MTurk workers likely underestimates the prevalence of jargon terms for NoteAid’s target population. Our beta regression model did not find differences between demographic subgroups in the proportion of terms identified as jargon. The only group that was significantly less likely to identify a term as jargon was the group for which English was not their native language. However, the small sample for this subgroup makes interpretation difficult. For the subgroups of adequate sample size, these results suggest that the selection of jargon by the NoteAid definition writers is sufficient. In this work, MTurk workers and definition writers selected jargon terms from actual deidentified EHR notes. Most existing work looks at identifying jargon more generally, using data from web search logs or web forums. By using EHR notes for our task, the jargon identified should be more relevant for the downstream task of presenting definitions to patients looking at their own EHR notes. Limitations and Future Work There are several limitations to this work that can inform future research. First, we examined a relatively small number of passages in this experiment. A different selection of passages could have produced different jargon identifications among the MTurk workers and definition writers. A second limitation is that we did not examine context dependency in this study. For example, the term “tips” can be either nonjargon (suggestions) or jargon (transjugular intrahepatic portosystemic shunt). 
The NoteAid system considers context-dependent meanings, and future studies could address this. Another limitation concerns our lay population. While MTurk is often used for crowdsourced data collection, the demographic characteristics of the collected sample are not representative of the broader US population. In particular, we are interested in jargon identification behaviors for individuals with low health literacy, while the MTurk workers in our study generally had high health literacy scores. Of note is that the proportion of terms identified as jargon by MTurk workers in the lowest health literacy groups diverged from that of the higher health literacy groups. However, these sample sizes were very small, making interpretation difficult. Future work should attempt to replicate the results in actual patients, as in our previous work. Using our demographic and other variables, the beta regression model only explained a small proportion of the variability in jargon identification among the MTurk workers. It is possible that there are unmeasured variables that would account for additional variability, such as income, occupation, personal experience with health issues, or interest in health topics. The NoteAid definition-writing process is a distributed task. Each definition writer works on separate notes to identify and define jargon terms. Therefore, a consistent understanding of what is or is not jargon is important to ensure consistent coverage across the notes. Future development of NoteAid can investigate automatic jargon identification for definition writers through natural language processing tools; using the corpus of human-identified jargon as training data may lead to a more effective automated system if those data are consistent. In particular, with the growing impact of large language models, there is an opportunity to leverage them to improve patient understanding of notes. Future work can also use user information such as health literacy level and demographic information to identify the most relevant jargon terms and definitions, making the system even more personalized. Lastly, updating annotator instructions to be in line with established jargon classification frameworks can enforce consistency in labeling. There are other applications for jargon identification, such as clinical trial regulations requiring plain language summaries. This work can also inform jargon identification in other fields, such as law. A NoteAid-like tool for jargon identification and definition could define technical legal terms for lay individuals as they encounter them on the Web (eg, when reading a contract or terms of service agreement). Conclusion In this work, we have shown that trained definition writers could consistently select jargon terms for which laypeople need definitions. These results are encouraging for the continued development of NoteAid, and they have implications for other fields.
Lung Cancer Screening Knowledge in Four Internal Medicine Programs
7195ed34-b782-4eca-8923-0cef7bcb1a7f
10009012
Internal Medicine[mh]
The mortality burden of lung cancer in the United States remains elevated despite declining smoking rates and better treatments. An initial lung cancer presentation in advanced stages is common due to the asymptomatic nature of most early-stage disease. The last decade has seen a substantial increase in lung cancer screening (LCS) centers, stemming from multiple society guidelines as early as 2013 recognizing low-dose computed tomography (LDCT) as a life-saving intervention in certain populations. However, less than 5% of eligible persons were being screened in the United States as of 2017. In Canada, pilot feasibility studies are underway for a federal screening program. In the past 5 years, a Europe-wide policy to implement LCS has been developed with a focus on risk stratification, appropriate CT protocols, and smoking cessation. Data from the MILD/NELSON trials have also shown that the benefit of LCS increases the longer it is implemented. Despite the call for screening, many physicians are unaware of the efficacy of LDCT. Furthermore, LDCT may not be appropriately recommended to high-risk populations seen in primary care clinics, where future primary care physicians (PCPs) are currently training. In 2018, it was estimated that residents may be involved in up to 30% of primary care clinics nationwide. Factors such as early detection, low harm in screening, and trust in the referring physician have been shown to affect a patient’s preference for LCS. Physicians were concerned about the effectiveness of the test, the cost to the patient, and the possible harm from subsequent interventions. However, many of them still ordered chest x-rays, which are ineffective at reducing lung cancer mortality. Among internal medicine (IM) residents, limited knowledge of the high-risk population appropriate for LCS and of the effectiveness of LDCT in reducing mortality has been identified as a major barrier to recommending this measure. In the US Midwest, smoking rates continue to be higher than the national mean. In a recent review of 10 states, LCS rates ranged from 9–17%, including only 10.5% in 2 Midwestern U.S. states. Targeting high-risk populations in these states, recognizing knowledge gaps, and developing curricula to support their prospective preventative health physicians may prove valuable to reduce lung cancer mortality rates. The principal aim of this study is to evaluate LCS knowledge among IM residents from 4 residency programs in the US Midwest, where the total outpatient primary care visits are estimated to be 20 million every year. The 2013 USPSTF recommendation rationale was the source material to assess the knowledge base of IM residents. Eligible participants were identified through their respective program’s residency leadership. Eligible participants had to be active IM residents or medicine-pediatrics residents as of March of 2019. These residents were training in programs located in Indiana, Michigan, Nebraska, and Illinois. Data collection started in June of 2019 and stopped in January of 2020. The survey sought primarily to evaluate general knowledge. This was a composite variable calculated as the total number of correct responses divided by the total number of questions.
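As a rough illustration of how such a composite score and a between-group comparison might be computed, here is a short Python sketch. The study itself collected responses in REDCap and ran its analyses in Stata (described below); the responses, group sizes, and the use of scipy here are hypothetical and purely illustrative.

```python
from scipy import stats

# Hypothetical item-level responses (1 = correct, 0 = incorrect), 7 items per resident.
answers = {
    "PGY-1": [[1, 1, 0, 1, 0, 1, 0], [1, 0, 1, 1, 0, 1, 0]],
    "PGY-2": [[1, 0, 0, 1, 0, 0, 0], [0, 1, 0, 0, 1, 0, 0]],
    "PGY-3": [[0, 1, 0, 1, 0, 0, 0], [1, 0, 0, 0, 0, 1, 0]],
}

# Composite general-knowledge score: correct responses divided by total questions.
for pgy, residents in answers.items():
    scores = [sum(r) / len(r) for r in residents]
    print(pgy, round(sum(scores) / len(scores), 3))

# Chi-square test on a single item: correct vs incorrect counts by training year.
item = 0
table = [[sum(r[item] for r in residents), sum(1 - r[item] for r in residents)]
         for residents in answers.values()]
chi2, p, dof, expected = stats.chi2_contingency(table)
print(round(chi2, 3), round(p, 3))
```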
Additionally, it specifically measured: a) age and smoking history group eligibility, b) cancer-specific and overall mortality benefit, c) populations that benefit the most from screening, d) mortality benefit of lung cancer screening with LDCT compared to mammogram and colonoscopy, and e) self-perceived LCS knowledge. Prior to taking the survey, residents were asked not to review the literature on LCS. The survey was distributed using REDCap (Research Electronic Data Capture) via email containing a public hyperlink leading to an online form. It was sent weekly to all residents and distributed by the authors and their respective programs’ coordinators. REDCap is an online software toolset for electronic collection and management of research data. Data were hosted at Indiana University. The study received ethics exemption from the office of research compliance at Indiana University (protocol #1904577492A001) because it involved research that only included interactions involving educational tests, survey procedures, interview procedures, or observation of public behavior. Data were analyzed using Stata 14. Descriptive statistics were used to stratify residents by post-graduate year (PGY). Statistical significance was set at P < .05 and was analyzed using Student’s t test and chi-square test, as appropriate. Forty-six percent (166/360) of residents responded to the survey. The distribution was 42%, 30%, and 28% among PGY-1, PGY-2, and PGY-3, respectively. The distribution per program was 37%, 15%, 28%, and 20%, respectively. The mean general knowledge score among all surveyed residents was 2.9/7 (43.1%). Programs’ general knowledge scores ranged between 30% and 55%, with no statistically significant difference among them (P = .56). General knowledge was statistically significantly better among PGY-1 residents (42%), who outperformed PGY-2 and PGY-3 residents (30% and 28%, respectively; P = .022). Approximately one third of residents across all training years and programs correctly identified the target population for LCS. More than 90% of all respondents agreed that LCS improves cancer-specific mortality. Regarding all-cause mortality, 64% of PGY-1 residents thought LDCT improved it, whereas only 55% of PGY-2 and 38% of PGY-3 residents concurred. Eighty-three percent of Program 2 residents correctly answered that LDCT results in an all-cause mortality benefit, although only half of residents in the other programs answered this correctly. When comparing the reduction in cancer-specific mortality between LDCT and colonoscopy and mammogram, there were statistically significant differences among the programs. Two thirds of residents perceived their knowledge to be equal to or less than 50%. There were no differences in perceived knowledge between PGY levels or programs. According to this study, knowledge of at-risk populations and the impact of LDCT on mortality was low among IM residents at 4 large training programs in the Midwest U.S. This result is consistent with the finding that, as of 2017, less than 5% of the population at high risk for lung cancer was being screened with LDCT in the United States. Improvement in screening rates for high-risk populations requires an improved knowledge base among future primary care physicians, who are most likely to recommend screening modalities for their patients. CMS has established the age range to be considered for LCS. This age range varies slightly from the landmark NLST trial (55–77 instead of 55–80). The reported lung cancer-specific mortality benefits by Pastorino et al and Becker et al were in the 20–39% range.
Most of our respondents selected a lower lung cancer-specific mortality benefit, which may inform their decision on whether to recommend the intervention. In the NELSON and MILD trials, women benefited significantly more than men. Fewer than 10% of our respondents were aware of this finding. When we consider the trend in smoking behavior among women compared to men, women may represent a population at overall higher risk of developing lung cancer in the future. LDCT carries risks, especially in lower-risk populations. A Veterans Administration study showed a high rate of false positives, which increased when lower-risk populations were screened. It was suggested that this may worsen the risk-benefit ratio for LCS. A different study found that patients who underwent LCS were more likely to continue smoking, possibly because of a false sense of security given by negative screening exams. Notwithstanding, it remains an internationally recommended method of screening. The national smoking rate in the United States is 16.7%. In the United States Midwest, smoking rates are 18.2%, only surpassed by the US South (18.8%). Interventions for early detection of lung cancer are essential to reduce mortality in these areas. The USPSTF evidence review suggested that LDCT and mammography in women aged 50–59 may have a comparable number needed to screen (NNS) to prevent one death. Based on this metric, LDCT outperformed mammogram and underperformed colonoscopy. Our respondents all agreed that they perceived LDCT to need more patients screened to prevent one death compared to the other two interventions. We believe that this assessment is consistent with an increased skepticism toward new interventions. This is the first study to evaluate the multi-institutional knowledge of lung cancer screening among internal medicine residents. Similar studies in practicing primary care providers or residents echo these findings. The trend for PGY-1 residents to outperform PGY-2 and PGY-3 residents was consistent among all programs, confirming a previously observed trend. This may be partially explained by increased motivation, or by recent medical school curricula or early residency training covering LCS recommendations. The proportion of PGY-1 residents in the analyzed sample was also higher, although not statistically significantly so. There are several limitations to this study. Our response group may be more motivated, increasing their willingness to respond and engage with the survey. This may skew the results toward better overall knowledge, which is a concerning possibility. All residents must be exposed to primary care settings during their training as required by the American Board of Internal Medicine. However, in university-based programs, most of the residents may decide to go into subspecialty training. Additionally, the lack of a time limit for survey responses may have allowed literature review, with no feasible way to control for this. Furthermore, knowledge may not be the only factor preventing LCS recommendations. Many factors derive from patient, provider, system, and insurance characteristics that may be suboptimal for promoting preventive care. Trainees providing primary care have a fundamental role in preventative health. Lung cancer screening knowledge in all respondents was unacceptably low. In their knowledge self-assessment, most were aware of their deficiencies. Early-year residents performed better than their seniors. Uninformed skepticism and knowledge gaps continue to be significant barriers to recommending lung cancer screening.
Construction and practice of a novel pharmaceutical health literacy intervention model in psychiatric hospital
cb165653-9c01-4b37-bedc-d46504b3c5f1
11498655
Health Literacy[mh]
In October 2020, the National Health Commission issued "Opinions on Strengthening and Improving Psychiatric Medical Services", which emphasized the importance of enhancing the psychiatric medical system, building a comprehensive service network, and accelerating pharmacy transformation. As part of this effort, pharmaceutical health literacy intervention (PHLI) plays a crucial role in improving patients’ ability to understand drug information and make informed choices. In psychiatric settings, it is equally important to improve the willingness of patients with potential mental illness to seek treatment. The concept of PHLI was first introduced by D. K. Theo Raynor, aiming to provide better medication-related services for populations with low health literacy. Effective communication of simple and understandable medical information, both in writing and orally, is crucial for treatment success, especially when explaining medication use to discharged patients. Sauceda et al. defined pharmaceutical health literacy as the ability to critically acquire, understand, and use basic medication information, thereby reducing medication errors arising from misunderstandings. Building on this, Pouliot et al. refined the definition through an international expert consensus using the Delphi method. This refined definition emphasizes the ability to obtain, understand, communicate, calculate, and process specific medication information, enabling individuals to make informed decisions about their medication and health, thereby ensuring safe and effective medication use. PHLI measures typically include teach-back methods, the use of simple language, and the chunk-check technique. The term "chunk-check", commonly used in the IT field, refers in health literacy to breaking down complex medical information into manageable chunks and gradually checking patient comprehension. It includes the following steps: breaking down complex information into chunks, evaluating the reasonableness of the chunks, and assessing the effectiveness of information delivery. With advancements in technology, PHLI increasingly relies on medical audiovisual tools and digital medication therapy management. These technological innovations enable more interactive and engaging educational approaches, such as the use of multimedia content to explain complex drug regimens, which is beneficial for patients with varying levels of health literacy and cognitive function. A review of 72 articles involving population-wide surveys in Australia highlighted that health literacy interventions can significantly enhance the understanding of health information and recommendations among individuals with low health literacy, thereby improving treatment outcomes. Moreover, numerous studies have confirmed that psychiatric health literacy interventions can improve treatment outcomes and alleviate symptoms. During the hospitalization of patients with mental illness, implementing health literacy interventions, including PHLI, is both convenient and effective. Additionally, PHLI broadens the scope of pharmaceutical care. A lack of PHLI can significantly reduce medication adherence, particularly among psychiatric patients post-discharge, leading to uncertainties in treatment management and a substantial increase in the recurrence of mental illness. While PHLI measures have been explored globally, such as using visual aids to convey prescription instructions to illiterate patients, they are not directly transferable to China due to differences in medical systems.
Health literacy interventions in China began later than those in other countries, with varying patient health literacy levels and no unified practice. Additionally, disparities in pharmaceutical expertise, pharmacist availability, and Internet penetration have hindered the development of comprehensive PHLI in psychiatric care. Currently, PHLI in China relies mainly on community-based efforts like free clinics and lectures, but a more proactive, integrated approach is needed throughout the patient care process. A multicenter study conducted in mainland China found that caregivers of children are highly concerned about medication safety and are willing to learn about related topics. Similarly, a study conducted in Finland and Malta identified a paradox where patients have low health literacy but a strong desire to participate in treatment decisions. An observational study further demonstrated that elderly asthma patients wish to understand medication information but often struggle to obtain appropriate, personalized guidance. Cross-sectional studies involving populations such as patients with hypertension, patients in outpatient care institutions, dialysis patients, and individuals with coronary heart disease consistently indicate low levels of pharmaceutical health literacy. These findings underscore a critical challenge for pharmacists: bridging the gap between the specialized nature of pharmaceutical services and the general public’s understanding. Since the determination of treatment plans requires informed decisions from both patients and their families, good health literacy is essential for making well-informed medical choices. Therefore, improving pharmaceutical health literacy is a crucial strategy for enhancing patient decision-making and treatment outcomes. Most published articles focus on the development of pharmaceutical literacy assessment tools, the measurement of pharmaceutical health literacy, the role of pharmaceutical health literacy in preventing chronic diseases and influencing treatment adherence, and pharmaceutical health literacy education. However, to our knowledge, no studies have addressed the practical implementation of pharmaceutical health literacy. Additionally, pharmaceutical health literacy reflects patients’ medication capabilities and attitudes to some extent. More importantly, it facilitates a shift in the patient’s role in healthcare—from merely complying with expert guidelines to making informed decisions. Therefore, it is essential to actively promote pharmaceutical health literacy through practical and modular approaches. A survey on the mental health literacy of the Chinese population from 1997 to 2018 concluded that more high-quality health literacy interventions are necessary to improve health literacy and promote positive changes in medical behavior in China. This need is particularly critical for vulnerable groups, such as patients with mental illnesses and their caregivers, for whom feasible PHLI measures are currently lacking. Therefore, developing a PHLI model for psychiatric hospitals is essential to provide technical support for these interventions and improve medical outcomes. Building on previous practical experience, our hospital, a key psychiatric institution in the Yangtze River Delta region, established a PHLI model in 2022 with the aim of creating a comprehensive PHLI workflow and enhancing the quality of pharmaceutical care.
2.1 Organizational structure design PHLI primarily targets patients with mental illness and their caregivers, while also actively engaging the general population. To support this initiative, an interdisciplinary team comprising the pharmacy department, medical department, mental illness control center, nursing department, and community committee was established, leveraging the hospital’s information system. The model included the participation of five clinical pharmacists, all of whom had completed clinical pharmacist training accredited by the National Health Commission and received certification. To guide the implementation of PHLI, the "Standard Operating Procedure for Pharmaceutical Management and Pharmaceutical Services at The Affiliated Mental Health Center of Jiangnan University" was developed. Prior to launching the intervention model, targeted training in health literacy interventions—covering essential methods such as teach-back, chunk-check, and the use of simple language—was provided to ensure standardization and accuracy. All personnel reported to the chief pharmacist, and the organizational structure is illustrated in . 2.2 Intervention model design 2.2.1 Inpatient-based PHLI mode The inpatient-based PHLI mode was structured into four main parts. The first part involved medication reconciliation and health literacy intervention at the time of admission. Pharmacists conducted medication reconciliation, explained common adverse reactions, and provided detailed information on the function, usage, and dosage of each medication. For patients with low health literacy, PHLI was administered weekly, focusing on drug effects, proper medication methods, and how to respond to adverse reactions. Additionally, real-time documentation was maintained through the internal file system, including records of pharmaceutical care, medication use, and pharmaceutical ward rounds. The second part was the intervention during hospitalization. Throughout the treatment period, pharmacists addressed patients’ questions about their prescriptions. For patients with special conditions—such as a history of drug abuse, refusal to take medication, or a history of adverse reactions—pharmacists conducted targeted interventions and reported these cases to the chief pharmacist. Following consultation, individualized intervention strategies were developed. The third part focused on patient discharge. After assessing the patient’s discharge medication, pharmacists created an educational list detailing the diagnosis, dosage and administration instructions, potential adverse reactions, and corresponding countermeasures. This list provided targeted education for patients with multiple conditions, assisting them and their families in developing a post-discharge pharmaceutical plan to ensure continuity and stability of treatment. The fourth part involved a retrospective analysis of the hospital information system (HIS). HIS is a comprehensive platform for hospital information management in China, where all patient information and medical data are stored. Through HIS, after excluding patients currently under intervention, those older than 65 years, with multiple diseases, or with more than five prescriptions were selected for focused re-rounds to confirm PHLI, particularly in areas such as medication adherence, prescription complexity, and adverse reactions. Other inpatients were screened monthly.
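The screening rule applied in this fourth part can be summarized in a few lines of code. The sketch below is illustrative only: the field names and patient records are hypothetical placeholders rather than the actual HIS data model.

```python
# Hypothetical extract of HIS records; real queries run against the hospital's system.
patients = [
    {"id": "P001", "age": 72, "diagnoses": 3, "prescriptions": 6, "under_intervention": False},
    {"id": "P002", "age": 45, "diagnoses": 1, "prescriptions": 2, "under_intervention": False},
    {"id": "P003", "age": 68, "diagnoses": 2, "prescriptions": 4, "under_intervention": True},
]

def needs_focused_reround(p):
    """Exclude patients already under intervention, then flag those older than 65,
    with multiple diseases, or with more than five prescriptions."""
    if p["under_intervention"]:
        return False
    return p["age"] > 65 or p["diagnoses"] > 1 or p["prescriptions"] > 5

focused = [p["id"] for p in patients if needs_focused_reround(p)]
print(focused)  # patients not selected here are screened monthly instead
```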
2.2.2 Outpatient-based PHLI mode The outpatient-based PHLI mode was divided into two parts. The first part involved interventions through pharmacist-managed clinics (PMC). PMC are outpatient clinics led by clinical pharmacists, providing services such as PHLI, medication therapy management, drug consultation, medication education, and guidance on medication safety. During the medication dispensing process, pharmacists address patients’ pharmaceutical concerns in real time. For patients with specific needs, pharmacists may introduce them to the PMC for targeted intervention. Clinical pharmacists in the PMC create pharmaceutical profiles for patients, recording details such as gender, age, diagnosis, medical history, allergies, and drug usage and dosage. After addressing the patients’ concerns and taking necessary actions, a follow-up appointment is scheduled to ensure medication compliance. The second part focused on interventions based on therapeutic drug monitoring (TDM) and genetic testing. Pharmacists provided specific interventions, including explanations of professional terminology, the significance of TDM, and the implications of genetic testing results for treatment. Patients with abnormal results, such as medication concentrations exceeding the warning levels or unique genotypes, were referred to the PMC to prevent serious non-compliance due to insufficient health literacy. 2.2.3 Internet+ based PHLI mode This mode comprised three main parts. The first part involved interventions through Internet information subscription via WeChat public accounts. WeChat public accounts are the most widely used information subscription platforms in China, with nearly all medical institutions having established accounts for appointment and information dissemination. Patients can easily access hospital information, learn about expert teams, receive health services, make appointment bookings, and complete online consultations as well as payment and reimbursement services through the WeChat public account. In addition, hospitals use these accounts to publish popular science articles, authored by various professionals including doctors, pharmacists, nurses, laboratory technicians, and psychologists, to educate patients and the public in simple, understandable language. Articles by pharmacists focus on PHLI topics, such as the importance of TDM for patients with mental health disorders. When necessary, pharmacists may break down comprehensive pharmaceutical articles into a series of shorter articles (chunks) to cover the related knowledge thoroughly. Followers of the public account can read these articles and pose questions through comments. In response, pharmacists can create short videos, reply to comments, and generate additional public account articles based on readers’ inquiries. The second part involved interventions through an Internet hospital. With our hospital’s Internet medical services, patients can receive medical advice from home. Although online consultations do not allow for face-to-face interactions, pharmacists can provide patients in need with online health literacy support. Weekly summaries of consultation queries were reviewed by pharmacists, and as a result, frequently asked questions were incorporated into the pharmacy-related popular science content on the public account. The third part involved interventions based on instant messaging software such as WeChat. While WeChat public accounts are institutional platforms, WeChat itself is a personal messaging tool. Many older adults in China, despite finding online consultations challenging, use WeChat regularly.
This platform enables patients to establish direct, one-on-one communication with pharmacists for PHLI. Pharmacists can review medication information and provide medication monitoring for patients with poor adherence or chronic conditions when necessary. The frequency of interactions is not fixed, as it depends on how the PHLI was initiated and on the patient’s physical condition. This two-way communication simplifies interactions and allows pharmacists to dynamically monitor medication behaviors and offer real-time PHLI. More importantly, interventions based on WeChat and similar instant messaging software offer a novel perspective for addressing the dilemma of limited medical resources and improving healthcare access for older adults. 2.2.4 Community-based PHLI mode The community-based PHLI mode primarily included community lectures and free clinics. Community lectures were organized through the collaborative efforts of medical associations. In China, a medical association is a consortium that brings together a large hospital and several community hospitals, characterized by shared medical resources and collaborative healthcare initiatives. As a supplement to the Internet+ PHLI mode, pharmacists delivered lectures to community residents, offering one-on-one counseling sessions after the lecture to address individual questions regarding drug use, pharmaceutical knowledge, and related topics. Additionally, pharmacists assisted in managing chronic illness medications and household medicine cabinets. Pharmacists also participated in large-scale free clinics held annually. In China, public welfare events known as free clinics are frequently organized by healthcare institutions, government bodies, and party organizations during holidays or important commemorative days. These free clinics are not limited to specific locations; they are held in various public spaces such as plazas, hospital lobbies, community centers, and nursing homes. Renowned experts provide free medical consultations and treatments to local patients, particularly those with limited access to medical care. Pharmacists involved in these free clinics conducted PHLI for individuals with low health literacy to ensure the effectiveness of their treatment. 2.3 Data statistics and analysis The distribution of variables was assessed using descriptive statistical methods. Data management and analysis were conducted using Microsoft Excel 2016 and SPSS 24.0, respectively. Categorical data were described in terms of frequencies and percentages. Percentages were calculated by dividing the number of cases by the total number of cases for the year or the two-year period, and results were rounded to two decimal places. 2.4 Ethics approval and consent to participate The study was conducted in accordance with the Declaration of Helsinki, and was approved by the Ethics Committee of Wuxi Mental Health Center (WXMHCIRB2023LLky007). All participants signed informed consent and the data were analyzed anonymously. 2.5 Data acquisition and participant information The data were accessed on January 10, 2024, and the information of individual participants cannot be identified during or after data collection.
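To make the percentage calculation in section 2.3 concrete, the following minimal Python sketch tabulates the PHLI case counts by intervention mode (the counts are those reported in the Results below) and computes each mode's share of the two-year total. The actual analysis was performed in Microsoft Excel 2016 and SPSS 24.0, so this snippet is purely illustrative.

```python
# PHLI case counts by intervention mode over 2022-2023 (taken from the Results section).
cases_by_mode = {
    "Inpatient-based": 386,
    "Pharmacist-managed clinic": 65,
    "TDM and genetic testing": 42,
    "Internet information subscription": 91,
    "Internet hospital": 28,
    "Instant messaging software": 7,
    "Community-based": 17,
}

total = sum(cases_by_mode.values())  # 636 PHLI cases in total
for mode, n in cases_by_mode.items():
    # Percentage of the two-year total, rounded to two decimal places.
    print(f"{mode}: {n} ({100 * n / total:.2f}%)")
```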
3.1 Number of reported PHLI cases presents the number of interventions and the total patient population across different methods. Among these, 386 PHLI cases were based on inpatient care, accounting for 60.69%. PHLI through PMC identified 65 cases, representing 10.22%, while TDM and genetic testing contributed 42 cases, accounting for 6.60%. Internet information subscription uncovered 91 cases, making up 14.31%. The number of PHLI cases via Internet hospitals was 28 (4.40%), and 7 cases were identified through instant messaging software (1.10%). Additionally, 17 cases were reported through community-based PHLI, accounting for 2.67%. (See for original data). 3.2 Basic information In 2022, there were 114 PHLI cases involving males (42.07%) and 157 cases involving females (57.93%). In 2023, the numbers were 100 cases (38.91%) for males and 157 cases (61.09%) for females. Across both years, 200 PHLI cases involved patients aged 18 to 40, accounting for 37.88%. 3.3 PHLI type distribution The types of interventions primarily focused on adverse reactions (18.87%), dosage and administration (11.64%), and TDM (9.43%). In 2023, PHLI before pharmaceutical care accounted for 7.17% of all PHLI, with 23 cases. Patients were particularly concerned with drug usage and efficacy. 3.4 PHLI strategies The intervention strategies primarily focused on adverse reaction identification (10.22%), interpretation of pharmaceutical reports (7.23%), and routine examination reminders (6.45%). Ensuring medication compliance and managing adverse reactions were key areas of focus within these strategies. 3.5 Drug distribution displays the top 10 most frequently distributed medications within the PHLI model.
With the rise of individualized treatment, medical strategies have become increasingly complex, making health literacy a critical factor in patients' understanding of their care. This has garnered significant clinical attention. PHLI is designed to assist patients, particularly those with low health literacy, in addressing common clinical challenges such as medication adherence, chronic disease self-management, and post-discharge medication use. For psychiatric patients and their caregivers, developing accurate PHLI is critical, covering aspects like medication timing, administration methods, managing non-compliance, and responding to adverse reactions. However, there is currently no comprehensive solution for practical implementation. Most existing models focus on intervention measures, outcomes, and methodological development, but lack a structured operational process based on hospital practice. Fortunately, China is increasingly prioritizing health literacy, highlighting the need to establish and enhance a PHLI network.

Our hospital was the first to implement psychiatric PHLI, with limited prior experience to draw upon, and the PHLI model remains in its early stages. Nevertheless, the model we developed facilitates easier and more convenient patient interventions. It addresses potential gaps, such as overlooking patients with low health literacy because of high workload or staff shortages, through community lectures, Internet+, instant messaging software, and retrospective information systems analysis. This comprehensive approach aims to reach nearly all patients within our hospital.

Psychiatric patients often exhibit low health literacy and significant non-compliance behaviors, including irregular medication use, hiding medications, refusing to take medications, and drug abuse. The multi-method collaborative intervention strategy employed in our PHLI model is designed to comprehensively enhance patients' pharmaceutical health literacy. Internet+ and community-based PHLI help compensate for follow-up losses due to limited pharmacist resources and ensure a broader reach. Post-intervention, pharmacists establish patient records and provide comprehensive pharmaceutical care. Additionally, our PHLI model integrates with the hospital's adverse reaction and drug abuse monitoring systems, broadening the intervention scope by rescreening patients within these systems. While full-time intervention pharmacists reduce communication gaps and waiting times, this approach demands significant manpower, resources, and funding. Continuous improvement of the PHLI model is essential, along with the development of new interventions, such as joint physician-pharmacist interventions, chronic disease management models, and targeted interventions for the digital divide among older adults. The increasing proportion of PHLI conducted via Internet information subscription, particularly through platforms like WeChat public account articles, indicates the growing significance of these novel methods.
These articles effectively simplify complex information by breaking it down into digestible chunks using simple language and multimedia elements, making them highly acceptable. Digital interventions often outperform traditional didactic approaches in terms of effectiveness. Additionally, the self-iterative nature of these platforms allows for ongoing refinement of the article publishing strategy based on reader feedback. The predominance of young patients in PHLI may be attributed to their greater receptiveness to diverse forms of intervention and their need for higher health literacy to balance efficacy with adverse reactions. Adverse reactions remain a primary focus in PHLI, as they allow for the direct identification of patient issues and timely intervention. Ensuring medication compliance and providing simple pharmaceutical guidance are central to PHLI efforts.

The PHLI model developed in this study offers innovative approaches to addressing the health literacy gap and provides more targeted interventions for patients. This model also contributes to reducing patient health management risks and is highly accessible. Its high replicability and reliance on routine technologies, which do not incur additional costs, suggest that it could be effectively implemented and refined in various countries and health systems. However, there are notable limitations, including the lack of health literacy classification, insufficient publicity, and variability in pharmacists' professional skills. Follow-up research will further improve the PHLI model, incorporate more special intervention projects, expand the coverage of interventions, raise awareness of pharmaceutical care, optimize continuing education mechanisms, and establish a robust foundation for subsequent policy development.

The PHLI model implemented at our hospital represents an innovative approach to health literacy management, enhancing both the operation and scope of PHLI. This model effectively addresses the increasing health literacy needs of the public, helps to narrow the health literacy gap, and enables timely intervention in the early stages of clinical practice. It also contributes to the provision of diverse clinical pharmaceutical services.

S1 Table (XLS)
Mastering Your Fellowship: Part 1, 2023
A 70-year-old male patient presents with epistaxis to the emergency centre (EC). The patient is bleeding profusely, and the team cannot localise the source of the bleeding. The patient's vital signs are as follows: blood pressure = 160/80 mmHg, pulse = 108 beats/min, respiratory rate = 24 breaths/min, temperature = 37.5 °C. He has no other evidence of bleeding. The patient has been pinching his nose for the last 10 minutes. The bleeding continues when the pressure is released. You note that the team on call is a community service medical officer and two interns. They phone you for advice at 23:00. What is the most appropriate next step?

a) Administer intravenous tranexamic acid.
b) Insert a compressed nasal sponge.
c) Insert a Foley catheter and inflate.
d) Lower the blood pressure.
e) Pack the anterior nasal cavity with gauze.

Answer: b)

Model answers

Epistaxis is a relatively common condition, although the actual incidence is unknown because most cases self-abort and are managed at home. Severe epistaxis requires prompt evaluation in the EC and appropriate resuscitation. Take a focused history, noting the duration and severity of the haemorrhage and the side of initial bleeding. Enquire about previous epistaxis, hypertension, hepatic or other systemic disease, family history, easy bruising or prolonged bleeding after minor surgical procedures. Recurrent episodes of epistaxis, even if self-limited, should raise suspicion of significant nasal pathology. Use of medications, especially aspirin, nonsteroidal anti-inflammatory drugs, warfarin and heparin, should be documented, as these predispose to epistaxis. The examination using a light source is essential in establishing the point of bleeding. Applying vasoconstrictor drops may slow the bleeding, allowing for an accurate source assessment. Patients should be educated about first aid, which includes pinching the nose and applying an ice pack to the forehead while leaning forward.

The relationship between hypertension and epistaxis is not well understood. Patients with epistaxis commonly present with elevated blood pressure. Epistaxis is more common in hypertensive patients due to long-standing vascular fragility. Hypertension, however, is rarely a direct cause of epistaxis. More commonly, epistaxis and the associated anxiety cause an acute elevation of blood pressure. Therefore, therapy should focus on controlling the haemorrhage and reducing anxiety as the primary means of blood pressure reduction.

Insert pledgets soaked with an anaesthetic-vasoconstrictor solution into the nasal cavity to anaesthetise and shrink the nasal mucosa. Nasal packing is the usual practice in most settings in South Africa but is often poorly done and requires some skill. Packing is commonly performed incorrectly, using an insufficient amount of packing, set primarily in the anterior naris. The gauze is a plug rather than a haemostatic pack when placed in this way. Physicians inexperienced in proper gauze pack placement should use a nasal tampon or balloon instead. A compressed sponge (e.g. Merocel®) is trimmed to fit snugly through the naris. Moisten the tip with surgical lubricant or topical antibiotic ointment. Firmly grasp the length of the sponge with bayonet forceps, spread the naris vertically with a nasal speculum and advance the sponge along the floor of the nasal cavity. Once wet with blood or a small amount of saline, the sponge expands to fill the nasal cavity and tamponade bleeding.
The procedure requires very little skill and is suitable for all levels of emergency care doctors. Another easy method of gaining control of bleeding in the anterior naris is nasal balloons, available in different lengths. A carboxymethyl cellulose outer layer promotes platelet aggregation. The balloons are as effective as nasal tampons, easier to insert and remove and more comfortable for the patient. To insert the balloon, soak its knit outer layer with water, insert it along the floor of the nasal cavity and inflate it slowly with air until the bleeding stops. These balloons are not readily available in most public sector hospitals in South Africa.

Further reading

Naidoo M. Chapter 88: How to manage epistaxis. In: Mash B, et al., editors. South African Family Practice Manual. 4th ed. Braamfontein: Van Schaik; In press 2023.
Traboulsi H, Alam E, Hadi U. Changing trends in the management of epistaxis. Int J Otolaryngol. 2015;2015:263987. https://doi.org/10.1155/2015/263987
Bamimore O, Silverberg MA. Acute epistaxis [Internet]. 2022. New York: Medscape. [cited 2022 Sept 12]. Available from: https://emedicine.medscape.com/article/764719-overview

You are the family physician working in a community health centre. A medical officer (MO) working in the paediatric clinic alongside primary health care (PHC) nurses commented that she has recently seen a few children with hearing loss as a complication of otitis media (OM). At the same time, it is noted in the Pharmaceuticals and Therapeutics Committee (PTC) meeting that there is an increased need for antimicrobial stewardship in the management of common upper respiratory tract infections (URTIs).

As a leader of clinical governance in the clinic, what initial steps would you take to investigate this problem in the clinic? Describe three different approaches you might take. (6 marks)
Based on your findings, you decide to do a quality improvement project (QIP) on one of your findings. Describe the process you would follow. Apply a relevant example to this process in line with one of your responses to question 1. (6 marks)
You plan a continuing professional development (CPD) meeting to address the knowledge gap. List four important learning outcomes written in the correct format which address pertinent points in the management of OM in children. (4 marks)
Acquired antibiotic resistance and antimicrobial stewardship raise several ethical dilemmas regarding public health when it comes to balancing harms and benefits. Over a million deaths per year are attributable to resistant bacterial infections. Describe two ethical dilemmas relevant to primary care practice that you will broach in your CPD meeting to raise awareness. (4 marks)

Total: 20 marks

Model answers

1. As a leader of clinical governance in the clinic, what initial steps would you take to investigate this problem in the clinic? Describe three different approaches you might take. (6 marks)

(Provide any three approaches from the list below with a relevant example.)

File audit – Determine the current standard of care being provided and whether this aligns with treatment guidelines. Also consider antibiotic stewardship, appropriate prescription of antibiotics, quality of note keeping and the number of children presenting with OM or URTI.

Skills assessment and audit – Assess the competence of staff who are new and, on an ongoing basis, assess the turnover of staff and the provision of relevant supervision and training; note attendance at CPD meetings on the topic and observed consultations.
Exploring problems in teams – Apply root cause analysis methods, such as asking the 5 whys, using the fishbone template and applying process mapping techniques. This may assist in understanding where breakdowns are occurring regarding health system factors or process issues, health care worker–related factors and patient factors. These may include problems with patient load, lack of access to functional equipment (otoscope), a gap in knowledge of treatment guidelines, poor examination technique and patient medication adherence.

Explore learning needs and gaps – This may be on an individual level (doctors and PHC nurses), or it may be a priority and relevant for district health services and outcomes. Analyse and understand your intended audience and clarify their learning needs and gaps, which will in turn assist in developing learning objectives.

Any other relevant response.

2. Based on your findings, you decide to do a quality improvement project (QIP) on one of your findings. Describe the process you would follow. Apply a relevant example to this process in line with one of your responses to question 1. (6 marks)

The current situation has been explored in question 2.1. The next steps will be to (mention each step and elaborate with a relevant example for the mark):

Form a relevant team (including PTC committee members) – For example, family physician, MO and PHC nurse from the paediatric clinic, pharmacist and facility manager.
Agree on the problem definition and criteria and set target standards – Apply to one of the examples above.
Identify gaps in current provision – Apply to one of the examples above.
Analyse causes and explore ways to improve the situation – Apply to one of the examples above.
Plan and implement the change – Apply to one of the examples above.
Sustain the change – Apply to one of the examples above.

The cycle continues until the desired quality is achieved. The criteria used and the performance levels can be adjusted if necessary before the start of a new cycle (as per the principle of continuous quality improvement [QI]).

3. You plan a continuing professional development (CPD) meeting to address the knowledge gap. List four important learning outcomes written in the correct format which address pertinent points in the management of OM in children. (4 marks)

Background information (not part of the model answer): In higher education today, teaching activities are not defined in terms of the content but rather in terms of the intended outcomes for the learners (see Bloom's taxonomy). In other words, a learning outcome should specify what the learner should be able to do at the end of the teaching session. The learning outcome can be for knowledge, skills or attitudes, and the level of Bloom's taxonomy should be clear from the verb used – list, describe, demonstrate. At the end of your teaching activity, you should be able to:

Know or understand (cognitive domain: knowledge or application of knowledge in problem-solving or critical reflection) – Possible knowledge learning outcomes may relate to indications, contraindications, anatomy, equipment, drugs, fluids and aftercare.
Be able to do (psychomotor domain: skills) – Possible learning outcomes related to skill refer to performing the procedure.
Attitudes displayed (affective domain: values and attitudes) – Possible learning outcomes related to attitude may relate to communication, caring and consent.
The content relating to the South African national guidelines for the management of upper respiratory tract infections should be expressed in the learning outcomes. The model answer should include any four options from the list below, preferably covering each domain: knowledge, skills and attitudes.

At the end of this session, you should be able to list the common organisms that cause OM.
At the end of this session, you should be able to discuss the primary preventative measures that have reduced the incidence of OM in children.
At the end of this session, you should be able to demonstrate the correct examination of the ear using pneumatic otoscopy and tympanometry.
At the end of this session, you should be able to list the diagnostic criteria for acute OM.
At the end of this session, you should be able to describe an approach to rational antibiotic prescribing for acute OM.
At the end of this session, you should be able to list conditions under which antibiotics should be prescribed for acute OM and when a more conservative approach can be taken.
At the end of this session, you should be able to demonstrate how you counsel a carer or parent on when management with antibiotics may be required and on the issue of antibiotic adherence.

4. Acquired antibiotic resistance and antimicrobial stewardship raise several ethical dilemmas regarding public health when it comes to balancing harms and benefits. Over a million deaths per year are attributable to resistant bacterial infections. Describe two ethical dilemmas relevant to primary care practice that you will broach in your CPD meeting to raise awareness. (4 marks)

The model answer should include any two well-described points for 2 marks each.

Primordial prevention and social determinants of health – Even when antibiotics are used scrupulously in individual patients, they can still acquire resistant organisms through no fault of their own from contact with infected or colonised people, animals and other environmental reservoirs. The medical fraternity should raise awareness and influence policy as a public health measure, including environmental and infection control policies.

Distributive justice – Overuse of antibiotics in general practice may be because of a lack of evidence-based use by health practitioners, other incentives for health care workers or pressure from patients. Overuse in individuals may result in the depletion of a common resource for all. This requires regulation of human behaviour and may even require regulating access to a common resource for the greater good.

Beneficence versus nonmaleficence – Antibiotic use is not a free ride; each use involves risk, and risk is more concentrated in the frequent user. Antibiotic consumption should require regulation. However, governance of antibiotic use through idealised prescription guidelines faces multiple real-world challenges – prescribers, agents and conflicts of interest. Clinicians may prioritise their immediate patients over the interests of other, distant or future patients. Antibiotics may also not be in the interest of the individual or the wider community.

Further reading

Brink AJ, Cotton MF, Feldman C, et al. Updated recommendations for the management of upper respiratory tract infections in South Africa. S. Afr. Med. J. 2015;105(5):345–52.
Moodley K. Chapter 10.8: Family medicine ethics – the four principles of medical ethics. In: Mash B, editor. Handbook of Family Medicine. 4th ed. Cape Town: Oxford University Press, 2017; p. 418–422.
Read the accompanying article carefully and then answer the following questions. As far as possible use your own words. Do not copy out chunks from the article. Be guided by the allocation of marks concerning the length of your responses.
Biagio L, Swanepoel DW, Laurent C, Lundberg T. Paediatric otitis media at a primary healthcare clinic in South Africa. S. Afr. Med. J. 2014;104(6):431–5.

Total: 30 marks

Did the study address a focused question? Discuss. (3 marks)
Identify three arguments the author made to justify and provide a rationale for the study. (3 marks)
Explain why a quantitative research methodology may be most appropriate for this research question. Comment on where and how a qualitative data collection methodology might still be applicable. (2 marks)
Critically appraise the sampling strategy. (5 marks)
Critically appraise how well the authors describe the data collection process. (5 marks)
Explain the difference between point prevalence and incidence. (2 marks)
Critically appraise the analysis and conclusions of the study. (4 marks)
Use a structured approach (e.g. relevance, education, applicability, discrimination, evaluation, reaction [READER]) to discuss the value of these findings to your practice. (6 marks)

Model answers

1. Did the study address a focused question? Discuss. (3 marks)

The authors aimed to measure the prevalence of otitis media in a South African primary health care (PHC) clinic, Witkoppen Health and Welfare Centre. The question is focused as it describes the population of interest (paediatric population attending a PHC clinic) and the condition or phenomenon of interest (point prevalence of otitis media in this population) in a particular community or area (the Diepsloot community north of Johannesburg, South Africa). The authors wished to diagnose the condition of interest with greater sensitivity and specificity than either otoscopy or pneumatic otoscopy, by using otomicroscopy to diagnose and classify otitis media as a cause of middle-ear pathology in children.

2. Identify three arguments the author made to justify and provide a rationale for the study. (3 marks)

Otitis media point prevalence in South Africa has never been measured, and most deaths from complications of otitis media are in sub-Saharan Africa and India. Chronic serous otitis media is also the most common cause of hearing impairment. This makes the study socially and scientifically relevant. Most studies of the prevalence of otitis media measure prevalence in children of school-going age and not in younger preschool children, who are more prone to otitis media. Early medical intervention is indicated in communities where chronic suppurative otitis media rates are more than 4%, as this constitutes a high-risk population. This supports the need to employ diagnostic methods to measure the point prevalence more accurately.

3. Explain why a quantitative research methodology may be most appropriate for this research question. Comment on where and how a qualitative data collection methodology might still be applicable. (2 marks)

By definition, prevalence is a quantitative measure of proportion and depicts the proportion of a defined population with a disease or illness at a specified time. Therefore, measuring a proportion would require a quantitative methodology and is impossible to achieve using qualitative data and methods. Given that otomicroscopy was used for the first time in this setting, the study could conceivably be amended to address the additional objective of assessing the otologist's experiences of otomicroscopy in primary health care. Perhaps the caregiver who brought the children would be interviewed for qualitative data on their experience of the process.

4. Critically appraise the sampling strategy. (5 marks)
The researchers selected a specific primary healthcare clinic for their study. The clinic is a specialist care centre for primary health care paediatric human immunodeficiency virus (HIV) and tuberculosis (TB) patients. This already indicates that it does not represent the more typical primary health care clinics in the country, which serve patients with all forms of illness. A more accurate title for this study would therefore describe measuring the prevalence of OM at an HIV and TB primary healthcare clinic. Furthermore, the sampling was not random but consecutive. They recruited 140 children aged 2–16 years as a sample from registered clinic patients known to the service: the participants were recruited from the entire paediatric population attending the clinic for any purpose, whether for a routine clinic appointment or for chronic or acute treatment. They do not indicate on which days they consecutively collected samples and whether they sampled equally for each day of the week. They only specified that the on-site data collection continued over the course of 2 weeks. Bias could be introduced in this way of sampling if, for example, a specific type of child (age or illness) tends to come to the clinic on some days more than others. The researchers do not indicate how they calculated the sample size. This always affects the precision of the estimate of prevalence. Often, it is helpful to use prevalence rates from the literature to calculate sample size estimates (a worked sketch of such a calculation is given after the further reading list at the end of this section).

5. Critically appraise how well the authors describe the data collection process. (5 marks)

The authors described the collection of demographic data under the study population subheading in the methods section and not under the data collection subheading. It would have made more sense to include this data collection step in the data collection subsection, as this information was included in the data set. The authors did not specify who collected this information, and it seems that this information might have been captured by a research assistant or the specialist otologist, linked to the informed consent process and possibly the otomicroscopy assessment. It is important to note the person(s) who collected the data from the patients and parents or caregivers, as well as the background of the data collectors. It was not clear whether the clinical notes and medical history from the patient's folder were consulted to complement the dataset and verify the accuracy of comorbid risk factors described in the introduction section (host-related and environmental factors). It would have been useful to present the demographic and medical background data collection instrument as a supplement. Interestingly, even though this clinic served as a specialist HIV and TB centre, the researchers were not able to collect clinical data on HIV status. They mentioned that 'ethical clearance did not allow for this' but do not specify the reasons behind this (whether it was a protocol design flaw or whether this was a specification from the ethics review board). The data collection subsection in the methods section describes the technical process of otomicroscopy, including the type of device used (a Leica M525 F40 surgical microscope).
It is not clear if only a single specialist otologist performed the technical evaluations over the 2-week period or if more than one observer was involved. This may have resulted in interobserver bias. Intra-observer bias may also have been possible given the workload of assessing 136 participants. It would have been interesting to know if this microscope allowed for digital photography to facilitate external review by an independent expert observer. It was also not clear if cerumen removal was done consistently by a single operator (the results section mentioned that cerumen was removed manually and was halted in the event of any discomfort). Finally, it was not clear if the technical device required calibration during the fieldwork process; usually, a device used to take repeated measures of several participants over a short span of time requires a calibration protocol to ensure consistency and accuracy. 6. Explain the difference between point prevalence and incidence. (2 marks) The two measurements can complement each other and provide a full picture but are often confused. The incidence is a measure of the rate at which new cases of disease appear over a time period, whereas the prevalence is the total number of cases of a disease at or during a specific point in time. It is often referred to as a ‘photograph or snapshot’ of a point in time (point prevalence). Prevalence describes the proportion of the population with a specific characteristic, regardless of when they first developed the characteristic. This means that prevalence includes both new and pre-existing cases, whereas incidence is limited to new cases only. 7. Critically appraise the analysis and conclusions of the study. (4 marks) The authors calculated the prevalence of otitis media appropriately and used well-defined otomicroscopic definitions for the different diagnoses. However, they proceeded to compare prevalence rates between two different age groups using Pearson’s χ2 (chi-squared) test but did not indicate that this comparison will be done in their original objectives. They also did not indicate that their sample size calculation anticipated an analytical component to their study and not just a descriptive point prevalence. The authors did find a statistically significant finding during this comparative analysis that otomicroscopy-confirmed otitis media was more prevalent in the younger group of participants (preschool) compared with the older group of participants (school-going age). The subtypes of diagnosed otitis media confirmed that otitis media with effusion (OME) was more frequently diagnosed in the younger group, while the most severe form of otitis media, chronic suppurative otitis media (CSOM), was more common in the older group. The prevalence of CSOM for the total study sample was 6.6%, which constitutes a high-risk population. The CSOM prevalence in the older group was even higher at 9.3%, which is rated as the highest prevalence based on the World Health Organization (WHO) classification system cited by the authors. The authors admitted to several study design limitations, including the sample size and the lack of information on comorbid medical conditions such as HIV and TB status, as well as host-related and environmental factors, including nutritional status. 
Although the authors concur that the HIV prevalence of the population could likely contribute to the higher prevalence of otitis media, they still problematically proceed to engage with the findings as if they represent the larger population of children in primary health care settings. This is most starkly noted in their conclusion, where the HIV positivity of the children in the study is omitted.

8. Use a structured approach (e.g. relevance, education, applicability, discrimination, evaluation, reaction [READER]) to discuss the value of these findings to your practice. (6 marks)

The READER format may be used to answer this question:

Relevance – Is it relevant to family medicine and primary care?
Education – Does it challenge existing knowledge or thinking?
Applicability – Are the results applicable to my practice?
Discrimination – Is the study scientifically valid enough?
Evaluation – Given the above, how would I score or evaluate the usefulness of this study to my practice?
Reaction – What will I do with the study findings?

The answer may be a subjective response but should be one that demonstrates a reflection on the possible changes within the student's practice within the South African public health care system. It is acceptable for the student to suggest how their practice might change within other scenarios after graduation (e.g. private general practice). The reflection on whether all important outcomes were considered is therefore dependent on the reader's perspective (is there other information you would have liked to see?). A model answer could be written from the perspective of the family physician employed in the South African district health system:

R: This study is relevant to the African primary care context, as children presenting to PHC facilities with otitis media are a common phenomenon, and there is a need to diagnose complicated otitis media such as OME and CSOM early to prevent complications.

E: The authors made the case that this is the first otitis media prevalence study in a PHC setting in South Africa, especially given their use of the enhanced diagnostic instrument, the otomicroscope operated by a specialist otologist. The study's novelty is limited by several design flaws, however.

A: It is not possible to generalise the study findings to the wider South African setting, as the study was conducted in a specialist HIV and TB PHC facility using a small sample with a nonprobability sampling method (consecutive sampling).

D: In terms of discrimination, the concern lies in the study design as mentioned above (small sample and sampling method). The diagnostic accuracy is noted, as the authors employed a superior diagnostic technique with clearly focused and defined diagnostic criteria. The data collection process and risk of bias are not adequately presented in the methods section. Using a reporting guideline such as the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist for observational studies would have enabled the reader to make a better judgement in terms of assessment of internal validity.

E: The study findings may be relevant to consider when planning coordination of care for children in a similar PHC facility. It is important to consider the presence of complicated otitis media in children, especially those with comorbid conditions. It is also important to note the low incidence of reported symptoms in the 2 weeks prior to otomicroscopy.
However, given the concerns described above regarding the study design and reporting, the findings are not generalisable to the typical South African PHC facility setting.

R: The study findings are limited by the study setting and design flaws. However, this does not detract from the need to ensure appropriate care for children at risk for complicated otitis media. This would include increasing and augmenting routine screening services with specialised otomicroscopy services where feasible. More research in typical PHC settings with larger samples and more comprehensive data collection tools is warranted to strengthen the case made by the authors.

Further reading

Pather M. Evidence-based Family Medicine. In: Mash B, editor. Handbook of Family Medicine. 4th ed. Cape Town: Oxford University Press, 2017; p. 430–453.
Riegelman RK. Studying a study and testing a test. How to read the medical evidence. 5th ed. Lippincott Williams & Wilkins; 2005.
MacAuley D. READER: An acronym to aid critical reading by general practitioners. Br J Gen Pract. 1994;44(379):83–5.
Von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann. Intern. Med. 2007;147(8):573–577. [cited 2022 Sept 19]. Available from: https://www.equator-network.org/reporting-guidelines/strobe/
The Critical Appraisals Skills Programme (CASP). CASP checklists. [online] 2022. [cited 2022 Sept 19]. Available from: https://casp-uk.net/casp-tools-checklists/
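As a purely illustrative companion to two methodological points raised above (the unreported sample size calculation noted in the answer to question 4 and the Pearson χ2 age-group comparison discussed in the answer to question 7), the short Python sketch below shows how such figures are commonly computed. All numerical inputs are hypothetical assumptions chosen for demonstration and are not taken from the appraised study; the scipy dependency is likewise an assumption about the reader's environment.

```python
# Hedged, illustrative sketch only: the inputs below are invented for
# demonstration and do not come from Biagio et al. (2014).
from math import ceil
from scipy.stats import chi2_contingency

# 1. Sample size for estimating a single prevalence: n = Z^2 * p * (1 - p) / d^2
z = 1.96    # 95% confidence level
p = 0.066   # anticipated prevalence taken from the literature (assumption)
d = 0.03    # desired absolute precision (assumption)
n = ceil(z ** 2 * p * (1 - p) / d ** 2)
print(f"Minimum sample size: {n}")  # 264 children with these assumptions

# 2. Pearson chi-squared comparison of otitis media prevalence in two age groups
observed = [
    [20, 50],  # preschool: cases, non-cases (hypothetical counts)
    [8, 58],   # school-going age: cases, non-cases (hypothetical counts)
]
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```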
Objective

This station tests the candidate's ability to manage a patient with persistent dizziness.

Type of station

Integrated consultation.

Role player

Simulated patient: male or female adult.

Instructions to the candidate

You are the family physician working at a community health centre. The medical officer asked you to see a patient with persistent dizziness, who presented to the emergency room. Your task: please consult with this patient and develop a comprehensive management plan. You do not need to examine this patient. All examination findings will be provided on request.

This is an integrated consultation station in which the candidate has 15 minutes. Familiarise yourself with the assessor guidelines that detail the required responses expected from the candidate. No marks are allocated. In the marks sheet, tick off one of the three responses for each of the competencies listed. Make sure you are clear on what the criteria are for judging a candidate's competence in each area. Provide the following information to the candidate when requested: examination findings. Please switch off your cell phone. Please do not prompt the student. Please ensure that the station remains tidy and is reset between candidates.

Guidelines to examiner

The aim is to establish that the candidate can diagnose vertigo, identify possible causes (cerebellar stroke with underlying hypercholesterolaemia) and develop an effective and safe management plan. Working definition of competent performance: the candidate effectively completes the task within the allotted time, in a manner that maintains patient safety, even though the execution may not be efficient and well structured. Not competent: patient safety is compromised (including ethico-legally) or the task is not completed. Competent: the task is completed safely and effectively. Good: in addition to displaying competence, the task is completed efficiently and in an empathic, patient-centred manner (acknowledges the patient's ideas, beliefs, expectations, concerns or fears).

Establishes and maintains a good clinician–patient relationship

The competent candidate is respectful, engaging with the patient in a dignified manner.
The good candidate is empathic, compassionate and collaborative, facilitating patient participation in key areas of the consultation. Gathering information The competent candidate gathers sufficient information to establish a diagnosis (acute vertigo; asks questions aimed at localising the problem and enquires about some psychosocial issues related to the problem). The good candidate additionally has a structured and holistic approach (enquiring about the causes of vertigo and assessing the impact on the emotional, social and occupational aspects of the patient’s life). Clinical reasoning The competent candidate identifies the diagnosis (acute vertigo due to a central cause, impacting the patient’s work performance as a bus driver). The good candidate makes a specific diagnosis (acute vertigo, likely due to a cerebellar stroke, with underlying possible familial hypercholesterolaemia, with major long-term occupational implications). Explaining and planning The competent candidate uses clear language to explain the problem to the patient and uses strategies to ensure patient understanding (questions OR feedback OR reverse summarising). The good candidate additionally ensures that the patient is actively involved in decision-making, paying particular attention to knowledge-sharing and empowerment. Management The competent candidate makes arrangements for urgent referral to a specialist physician or neurologist for further investigations (computerised tomography [CT] scan or magnetic resonance imaging [MRI]) as an inpatient. The good candidate additionally addresses psychosocial issues comprehensively and may start putting a follow-up plan in place for when the patient returns from the hospital. Examination findings Body mass index – 24 kg/m² Blood pressure – 138/94 mmHg; heart rate – 104 beats/min Haemoglobin – 13.5 g/dL Random blood glucose (HGT) – 5.9 mmol/L Urinalysis – No abnormalities Ears – Normal hearing bilaterally; no abnormalities on visual inspection, including otoscopy; Dix-Hallpike manoeuvre negative. Eyes – Xanthelasma on both eyelids; nystagmus on lateral gaze; normal vision, specifically no diplopia. Cardio-respiratory systems – No abnormalities. Abdomen – No abnormalities. Neuro – Marked ataxic gait; fine tremor at rest; unable to write own name; power, reflexes and sensation intact in all limbs. Appearance and behaviour Male or female adult, calm, 40–50 years old. Opening statement ‘Hello, Doctor. I’m having this dizziness all the time, since yesterday, and feeling nauseous.’ History Open responses: Freely tell the doctor ■ You were feeling very well yesterday morning. Around lunchtime, you suddenly started getting dizzy and vomited twice. You had to leave work, then slept at home until this morning, but it is not better. Closed responses: Only tell the doctor if asked ■ It feels like the room is spinning around you. Makes it difficult to walk. Not worsened by any specific positions. ■ Nauseous all the time, especially when you are moving. ■ You have no funny ringing noises or deafness in any of your ears. Your medical history ■ Diagnosed with high cholesterol at the age of 34 years. Did not want to use medication – just eating healthily and exercising occasionally. Cholesterol is a family problem; your brother and mother also have cholesterol problems, but you are unsure if they take medication. ■ You do not smoke, drink very little alcohol and exercise by walking once a week. 
Ideas, concerns and expectations ■ Your major concern is to get rid of this dizziness. ■ It affects your work as a bus driver. Further reading Department of Health. Acute Vertigo. In: Standard Treatment Guidelines, Adult Hospital level. Pretoria: Department of Health; 2019.
Majority of new patient referrals to a large pediatric rheumatology center result in non-rheumatic diagnosis
f3eaebcd-e460-402b-a9ec-b065ecba9703
10571278
Internal Medicine[mh]
Since its emergence as a distinct pediatric subspecialty in the 1970s, pediatric rheumatology has become crucial in the management of children with complex and life-threatening diseases associated with organ and connective tissue inflammation . More recently, we have seen novel immunomodulatory therapies, targeted genetic testing, and expansion of international patient registries improve diagnosis, treatment, and outcomes for children with rheumatic disease. However, despite these advancements, a simultaneous contraction of the United States pediatric rheumatology workforce and increased demand for rheumatology evaluation threaten to overwhelm the system. The 2015 American College of Rheumatology Workforce Study projected a significant increase in the supply-demand gap for pediatric rheumatology care over the next 10–20 years due to many factors, including an aging pediatric rheumatology workforce, few fellow graduates, expansion of the overall pediatric population, and concentration of providers in academic centers . While strategies have been proposed to address the supply shortfall, there is limited data looking into the demand for rheumatic care at the level of individual centers (Correll ACR). The three most recent analyses of individual center and small collections of pediatric rheumatology clinic populations were reported in 1994, 1996, and 2005. In 1994, Denardo et al. prospectively enrolled 4585 new pediatric rheumatology patients from eight clinics in southern New England over an 8-year period, reporting their diagnoses and incidence of rheumatic disease . Then in 1996, Bowyer and Roettcher published on the diagnoses of a larger cohort of 12,939 pediatric rheumatology patients from 25 clinics over a 3-year period (1992–1995) from across the United States . Lastly, in 2005, Rosenberg reported on diagnoses and disease frequencies of 3269 patients referred to the Pediatric Rheumatology Clinic at the University of Saskatchewan over a 23-year period (1981–2004) . Twenty years later, we aim to add to this knowledge by analyzing three years of new patient visits to a large tertiary care pediatric rheumatology center in order to identify emerging trends in referrals and areas for potential intervention to meet increased demand. Subjects and referral process The study population includes all patients referred to and seen by the University of Alabama at Birmingham Pediatric Rheumatology Division between January 2019 and December 2021 for a new patient evaluation. All care was provided at Children’s of Alabama and associated satellite locations within the state. In alignment with department policy, all patients under the age of 18 referred for rheumatology evaluation were offered an appointment, regardless of suspicion for rheumatic disease during the referral triage process. Referrals come from providers within the Children’s of Alabama system, community advanced practice providers and pediatricians, and from surrounding states in the American Southeast. All referrals were reviewed by pediatric rheumatologists within the division upon receipt for determination of acuity. Referrals that did not result in an attended appointment, including cancellations and “no-shows”, were excluded from analysis, as an accurate determination of diagnosis was unable to be reached. Patients initially evaluated as inpatient consults, but subsequently followed in rheumatology clinic, were also excluded. 
Methods and determination of diagnosis De-identified patient data was retrospectively abstracted from the electronic medical record system for the observable time between January 2019 and December 2021. Variables collected for each new patient included initial referral reason as per the referring provider, referral date, first appointment date, attended follow-up appointments, and final diagnosis. Diagnoses were assigned to a disease category via generally accepted rheumatic classification criteria or diagnostic assessments. If patients had their diagnosis changed at any point during their care, the final diagnosis or most recent diagnosis at the time of data abstraction was used in this analysis. A patient’s diagnosis was classified as a “rheumatic disease” if it required chronic management primarily by or in conjunction with a pediatric rheumatologist. During the study period, one of six different pediatric rheumatologists primarily managed each patient, with assistance from nurse practitioners and fellows-in-training. Data abstraction and analysis were undertaken as a Quality Improvement initiative within the University of Alabama at Birmingham Pediatric Rheumatology Division, with the goal to improve the appointment referral process and decrease appointment wait times. Given the specificity of the data to our individual center, the patient data used does not contribute to generalizable knowledge and this project therefore does not meet the formal definition of research per the US Department of Health and Human Services and was not formally supervised by the Institutional Review Board per policy. Analysis and calculations were performed with Microsoft Excel. Data was presumed to be non-normal in its distribution, so continuous variables were expressed in terms of median and interquartile ranges (IQR). 
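The descriptive summaries reported below (monthly volumes, diagnosis proportions, and referral-to-appointment wait times) reduce to date arithmetic and grouped medians with interquartile ranges. The following is a minimal illustrative sketch only: the authors report using Microsoft Excel, and the file and column names here (for example `referral_date`, `first_appointment_date`, `is_rheumatic`) are hypothetical rather than taken from the study's data dictionary.

```python
import pandas as pd

# Hypothetical extract mirroring the variables described in the methods;
# column names are illustrative, not the study's actual data dictionary.
visits = pd.read_csv(
    "new_patient_referrals_2019_2021.csv",
    parse_dates=["referral_date", "first_appointment_date"],
)

# Wait time in days between referral receipt and first attended appointment.
visits["wait_days"] = (
    visits["first_appointment_date"] - visits["referral_date"]
).dt.days

def median_iqr(s: pd.Series) -> pd.Series:
    """Median and interquartile range, appropriate for non-normal data."""
    q1, med, q3 = s.quantile([0.25, 0.50, 0.75])
    return pd.Series({"median": med, "q1": q1, "q3": q3})

# Median (IQR) wait time for rheumatic vs. non-rheumatic final diagnoses.
print(visits.groupby("is_rheumatic")["wait_days"].apply(median_iqr))

# Overall proportion of new patients ultimately given a rheumatic diagnosis.
print(visits["is_rheumatic"].mean())

# Monthly new-patient volume and proportion with a rheumatic diagnosis.
monthly = (
    visits.assign(month=visits["first_appointment_date"].dt.to_period("M"))
    .groupby("month")["is_rheumatic"]
    .agg(new_patients="size", prop_rheumatic="mean")
)
print(monthly)
```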
Between January 2019 and December 2021, 2638 patients were referred to and seen by our pediatric rheumatology clinic. Of these patients, 610 (23.1%) were eventually diagnosed with a rheumatic condition (Table ). After their initial evaluation, only 33% of new patients were seen for a follow-up appointment, including 82.8% of patients with rheumatic diagnoses and 18.0% of those with non-rheumatic conditions (Table ). On a month-to-month basis, excluding February 2020 through May 2020 when clinic was significantly limited during the onset of the coronavirus disease 2019 (COVID-19) pandemic, appointments ranged from 52 to 137 new patients seen monthly with a median of 79 new patients per month (IQR 68–86) (Fig. ). The number of new rheumatic disease diagnoses ranged from 11 to 26 monthly (median 18, IQR 14–21) and non-rheumatic diagnoses ranged from 36 to 116 (median 58, IQR 50–68). The median proportion of patients seen with a rheumatic diagnosis was 22.2% of patients per month, consistent with the overall proportion of 23.1% throughout the study period. Of the 610 patients diagnosed with a rheumatic condition during the study period, the most common diagnosis was juvenile idiopathic arthritis (JIA) at 45.6% of diagnoses (Table ). Oligoarticular JIA was the most prevalent subtype comprising 33.5% of JIA diagnoses, followed by enthesitis-related JIA (19.8%), psoriatic JIA (17.3%), and rheumatoid factor negative polyarticular JIA (15.1%). No other diagnosis group comprised greater than 10% of the population. The next most common diagnoses included primary Raynaud phenomenon (7.4%), recurrent fever syndromes (6.9%), vasculitides such as ANCA-associated vasculitis, Henoch-Schönlein purpura, and Kawasaki disease follow-up (6.7%), and inflammatory eye disease including uveitis (6.2%). Other diagnosis groups made up less than 5% of the total rheumatic disease population. The median time from referral to appointment for patients with a rheumatic disease diagnosis was 13.8 days (IQR 4.9–46.0), with all individual diagnosis wait times (except Raynaud phenomenon) under 28 days (Table ). 
Two thousand and twenty-eight patients were diagnosed with a non-rheumatic cause of their chief complaint during initial or follow-up evaluation (Table ). Musculoskeletal pain was the most common non-rheumatic diagnosis, with 1253 (61.8%) patients diagnosed during the study period. Within the musculoskeletal pain category, 880 patients (43.4% of all non-rheumatic diagnoses) were diagnosed with musculoskeletal pain of a specific joint, followed by back pain and “other” musculoskeletal pain (e.g., “hand pain”, “foot pain”, etc.). Amplified musculoskeletal pain syndrome (AMPS), chronic fatigue syndrome, and complex regional pain syndrome together made up 235 patients (11.6%), followed by non-inflammatory rash (7.7%) and recurrent fevers (5.9%). The “other” category totaled 117 patients (5.8%) with various diagnoses listed in Table . The median appointment wait time for patients with non-rheumatic diagnoses was found to be 49 days (IQR 20–69.9), with individual non-rheumatic diagnosis wait times ranging from 14.7 days to 84.0 days (Table ). While national and international registries of pediatric rheumatology patients have grown over the last 10–20 years, analysis of individual center populations has been lacking in the literature. Although viewing the field of pediatric rheumatology through the lens of a single-center experience has limitations with respect to the advancement of treatment and diagnosis of rare diseases, it can shed a unique light on the supply-demand challenges facing the field today. Analyses by Denardo et al., Bowyer et al., and Rosenberg have previously looked into pediatric rheumatology diagnoses at the individual clinic and health system level, but there has been little published in the last 20 years to compare to our current study. It is hard to equate clinic volumes given multiple obscured factors like the number of providers, catchment area, etc., but compared to our monthly new-patient volume of 52–137 patients (median 79), these previously reported population numbers equate to an average of 71–172 new patients per clinic per year, demonstrating a substantial difference in patient load. The proportions of rheumatic disease diagnoses within the Denardo et al. and Bowyer et al. cohorts were reported to be 38% and 40.5%, respectively. In the Rosenberg cohort, out of 3268 patient referrals, a diagnosis was reached in only 2098 patients (64.2%), and of those diagnosed, 50.9% had rheumatic disease. Therefore, if we assume that all undiagnosed patients did not have a rheumatic disease (likely not correct), the rheumatic disease diagnosis rate of all referred patients would be 32.6%, with the true proportion likely higher, as some of the undiagnosed patients likely did have a yet-to-be-diagnosed rheumatic condition. Again, the comparison to our clinic’s 23.1% rheumatic disease diagnosis rate is difficult given our policy of offering appointments to all referred pediatric patients, but all previously reported cohorts had notably higher rates of rheumatic diagnoses. Juvenile rheumatoid arthritis/JIA was the most common rheumatic diagnosis in all three studies at 53%, 39.4%, and 31.6%, comparable to our JIA prevalence of 45.6%. Of the remaining non-rheumatic diagnoses, musculoskeletal conditions (56%, 36.1%) were most common, but at a smaller proportion than our 61.7%. 
Therefore, despite the previously reported populations having lower total patient volume and less rheumatic disease overall, the proportions of specific rheumatic conditions within the total rheumatic diagnosis cohort seemed to be similar to our current population, with our clinic having a higher rate of non-rheumatic disease. The pediatric rheumatology workforce supply in the United States is projected to significantly lag demand over the next few decades. As of 2018, 42 out of 50 states were noted to have less than one pediatric rheumatologist per 100,000 children and 30% of practicing pediatric rheumatologists self-reported as likely to retire in the following 10 years . And although there may be almost 400 pediatric rheumatologists practicing in the US and it’s likely that adult rheumatologists may see pediatric patients in various settings, the total clinical full-time equivalents (FTEs) devoted specifically to pediatric rheumatic care was reported to be 287 FTEs in 2015, even when including nurse practitioners (NPs) and physician assistants (PAs) . Demand for pediatric rheumatology care was estimated at 382 FTEs in 2015, already a shortfall of 95 FTEs with the 2015 workforce, and this gap is only expected to worsen by 2030 with the projected supply of 231 FTEs insufficient for the projected demand of 461 FTEs . Strategies have been recommended to increase the supply of pediatric rheumatology providers, including increasing exposure to the field during medical school and residency, decreasing fellowship training from 3- to 2-year commitments, increasing NP and PA utilization, and financial incentive programs . The demand side of the supply-demand shortfall may be a more complicated issue to address. Despite the 4–6 attending physicians, 3–4 nurse practitioners, and 1–3 pediatric rheumatology fellows that saw patients throughout our study period, it was and continues to be a struggle to see our large patient load without long appointment wait times. Moreover, even though there are limited studies focused on wait times for rheumatology evaluation, this is not a problem unique to our division. One study of adult patients referred to Ontario rheumatologists from 2000 to 2013 noted a median wait time from referral to rheumatologist consultation of 74 days, decreasing to 66 days for patients with systemic inflammatory rheumatic disease . In pediatric rheumatology, organizations in the United Kingdom and in Canada have set benchmark times for rheumatology evaluation at 4 weeks from referral for non-systemic JIA, but there is limited data on whether United States pediatric rheumatology centers can or do meet these guidelines . During the study period, the median time between referral and appointment (wait time) for all patients was found to be 42.0 days, outside the recommended 4 weeks for rheumatology appointment wait times. However, for those patients eventually diagnosed with a rheumatic condition, the median wait time was found to be much lower at 13.8 days, well within the recommended timeframe. Wait times for individual rheumatic diagnoses were found to vary, but patients with Raynaud phenomenon were the only ones with wait times outside of 28 days. In those patients diagnosed with a non-rheumatic condition, median wait time was 49.0 days, with infection-related diagnoses (reactive arthritis, serum sickness, transient synovitis) the only category inside of 28 days. 
These findings seem to suggest that our providers are proficient at triaging referrals based on the likelihood of rheumatic disease, recommending earlier appointments for those deemed high-risk and later appointments for those deemed low-risk. It might be prudent in our case, and in pediatric rheumatology as a whole, to focus on strategies to decrease demand for non-essential referrals, targeting the 76.9% of new patient referrals that do not result in a rheumatic diagnosis. One potential way to reduce referrals for non-rheumatic disease is to target primary care provider education. Previous studies have reported on the inappropriate ordering of laboratory testing by primary care providers, including antinuclear antibody (ANA) levels and rheumatoid factor, and the improper interpretation of musculoskeletal pain as a symptom of rheumatic disease in the pediatric population. The Choosing Wisely campaign has also previously highlighted unnecessary autoantibody panels and repeat ANA testing in its “Top 5” practices that add to the cost of care without improving quality. In our cohort, benign musculoskeletal pain made up 61.8% of our non-rheumatic disease diagnoses and 47.5% of all new patients seen during the study period. Of the 1223 patients (46.4% of the cohort) who had musculoskeletal pain listed in the reasoning for referral to pediatric rheumatology, only 11.6% were diagnosed with a rheumatic condition. Similarly, of the 546 patients with “positive ANA” in their referral reason, either as the sole reason or in conjunction with other symptomology, 7.1% were diagnosed with rheumatic disease. By improving the ability of primary care providers to conduct musculoskeletal examinations and correctly order and interpret rheumatology laboratory testing, we may be able to limit referrals for non-rheumatic ailments. An additional focus on the correct identification of benign musculoskeletal pain as a somatic symptom of depression and anxiety may also be helpful in reducing non-rheumatic referrals. In the last decade, numerous studies have shown a decline in the overall mental health of pediatric and adolescent patients, with significant increases in rates of depression, anxiety, and mental-health-related emergency department visits. There is a high prevalence of somatic symptoms in patients with depression and anxiety, and these patients may report only somatic symptoms at their initial primary care provider evaluation. Such a presentation may lead to a pediatric rheumatology referral for evaluation of potential inflammatory causes of pain. In our population, somatic disorders like AMPS and chronic fatigue syndrome were diagnosed in 227 patients from 2019 to 2021, making up 8.6% of all new patients seen during that period. We may be able to reduce the number of unnecessary pediatric rheumatology referrals by targeting these few simple topics for primary care education, especially in under-resourced communities. This study is limited by its single-center population, which makes generalizability difficult to assess, especially given our practice of offering all patients appointments regardless of the likelihood of true disease during the referral process. The COVID-19 pandemic appearing during the study period may have also altered rheumatology referral quantity and quality. In-person clinic appointments were drastically limited between February 2020 and May 2020, leading to a significant drop in new patient appointments. 
However, the limited dip in referral numbers with a rapid return to baseline levels makes this source of error unlikely. March 2021 was an outlier in terms of referral quantity that does not have such an easy explanation. The number of patients with non-rheumatic diagnoses doubled from the month before, while rheumatic diagnoses stayed constant. One potential explanation that has been discussed is that Alabama saw its largest peak in COVID-19 cases in December 2020–January 2021, so it is possible that the increase in non-rheumatic diagnoses was related to non-specific post-viral symptoms. Finally, “no-shows” of scheduled referrals and those patients diagnosed initially while inpatient were not counted in our analysis, and it is unclear how this affected the overall rates of diagnosed rheumatic disease. As the field of pediatric rheumatology expands in its diagnostic and treatment capabilities, a serious workforce supply-demand gap has the potential to limit our ability to care for patients with rheumatic disease. As shown by our analysis and previous studies, a sizable proportion of patients referred to and evaluated in pediatric rheumatology clinics are not diagnosed with a rheumatic condition. Timely pediatric rheumatology evaluation may be achieved through the limitation of non-rheumatic disease referrals, with improved education and increased management of these conditions in the primary care space. With the supply of pediatric rheumatology providers projected to decline, intervention in referrals made to pediatric rheumatology may allow for better accessibility and quality of care for patients requiring ongoing management of a diagnosed rheumatic disease. Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2
The assessment of dietary carotenoid intake of the Cardio-Med FFQ using food records and biomarkers in an Australian cardiology cohort: a pilot validation
b9998464-ef9a-4798-8362-bc8fb0c12232
11016364
Internal Medicine[mh]
Oxidative stress and inflammation are risk factors associated with the development of a range of chronic diseases including Cardiovascular Disease (CVD). Diet can influence the risk of chronic disease development and modulate these risk factors. A dietary pattern known to favourably reduce oxidative stress and inflammation is the Mediterranean Diet (MedDiet). The MedDiet pattern is predominantly a plant-based diet promoting a frequent and large consumption of fruits, vegetables and other plant-based foods including legumes and wholegrains, which are a major source of vitamins, minerals and fibre. Plant-based foods also contain bioactive constituents such as carotenoids, with fruits and vegetables being a concentrated source. Carotenoids are naturally occurring compounds that are found in plants. Humans are unable to synthesise carotenoids and they must be consumed from dietary sources. Carotenoids are associated with many health benefits and through their established mechanistic properties can reduce oxidative stress and inflammation. This has been associated with a reduction in the risk of chronic diseases which have underlying oxidative and inflammatory pathways in their aetiology, including Coronary Heart Disease (CHD), the most prevalent form of CVD. There are >600 carotenoids found within nature and foods. The six major dietary carotenoids detectable in plasma, and thus most extensively examined in validation studies, include: β-carotene, α-carotene, lycopene, β-cryptoxanthin, lutein and zeaxanthin. It is important that measurement of the diet can be completed accurately when assessing diet–disease associations. Dietary evaluation can be undertaken via multiple self-report assessment methods, for example, food record (FR), 24-h food recall and FFQ. FFQs are advantageous since they can estimate nutrient intakes over longer periods of time, are low cost and relatively easy to use. Despite their frequent use, the accuracy of dietary information collected by FFQs is imperfect. Systematic and/or random measurement error tends to overestimate consumption, which is a significant limitation. Validation techniques are employed to determine the accuracy of particular methods used to collect data, including questionnaires. During validation of an FFQ, a reference method (e.g. FR or 24-h food recall) is often used for comparison. It is important to note that such self-reported reference methods are themselves open to the same random and systematic errors as the FFQ, which may impact the validation process through the perpetuation of correlated errors. To overcome this limitation, biochemical markers (biomarkers) can be used as the reference method given they provide an objective measure and have errors that are independent of the dietary tool being validated. Previous reports describe a dose–response relationship existing between carotenoid intake and subsequent concentration in plasma, suggesting that carotenoid biomarkers are a reliable proxy for dietary carotenoid intake. The FFQ validation process can be enhanced by utilisation of two reference methods, i.e., biomarkers and traditional dietary assessment measures (e.g. FRs) in a triangulation validation technique known as the ‘methods of triads’, which allows dietary measures to be correlated against a theoretically true intake through derivation of a validity coefficient (VC). 
There is a scarcity of Australian FFQs developed to assess carotenoid intake (and as an extension, adherence to the MedDiet pattern) and even fewer tools which have been validated in a cohort with CHD using biomarkers or the methods of triads process. In 2013, we developed the Cardio-Med Survey Tool (CMST) FFQ to measure dietary intake in a multi-ethnic Australian cardiology population with an ability to measure MedDiet adherence through inclusion of foods that are consistent with the MedDiet pattern. The CMST-FFQ was found to be a reliable tool for measuring macro- and micronutrient intake. This tool was modified (CMST-FFQ-version-2 (v2)) to enable an assessment of carotenoid intake through expansion of the range and types of fruits and vegetables included. The aim of the present pilot study was to assess the validity of the CMST-FFQ-v2 for estimating dietary carotenoid intake over the preceding year. Validity was assessed by comparing the FFQ-estimated consumption of these compounds against intakes measured by the 7DFR and against objectively measured biomarkers (plasma carotenoid levels) in an Australian cardiology cohort. Study design Data was obtained from participants at study entry (baseline) in the AUStralian MEDiterranean diet (AUSMED) Heart Trial pilot study. The AUSMED Heart Trial is a multi-centre, randomised controlled MedDiet intervention for secondary prevention of CHD in a multi-ethnic Australian population. The intervention lasted for 6 months with a 12-month follow-up. Inclusion criteria included those who were ≥18 years, had adequate English comprehension for reading and writing and had experienced at least one acute coronary syndrome: acute myocardial infarction (AMI), angina pectoris with evidence of CHD, coronary artery bypass graft or percutaneous coronary intervention. Ethical standards disclosure This study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving research participants were approved by The Northern Hospital ethics committee (HREC P02/13), St Vincent’s Hospital ethics committee (HREC-A 016/13) and La Trobe University ethics committee (FHEC 13/159). Written informed consent was obtained from all subjects. The study is also registered on the Australian New Zealand Clinical Trials Registry (ACTRN12616000156482). Participants Participants were recruited from two major hospitals in Melbourne, Australia, including inpatient and outpatient cardiology settings. A total of 65 participants were enrolled in the baseline phase of the AUSMED pilot study between 2014 and 2016. To be included in the present validation study, participants were required to have completed the CMST-FFQ-v2, a 7-day FR (7DFR) and provided a blood sample. One participant did not complete both FFQ and 7DFR and five participants had inadequate blood sample volumes; thus 59 participants had complete data across all three measurement methods. No participants were excluded based upon the percentage of questions omitted on the FFQ (cut-off for exclusion was <90% complete); however, under-reporters (n 15) and over-reporters (n 5) of energy intake, determined by the Goldberg method (reported by Black), were excluded from analysis. Under-reporters were defined as EI (energy intake):EER (estimated energy required) <0.75, normal reporters were defined as EI:EER ≥0.75–1.25 and over-reporters as EI:EER >1.25. A final total of 39 participants were included in the validation analysis. 
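The Goldberg-type screening of energy-intake reports described above is a simple ratio test against the 0.75 and 1.25 cut-offs quoted in the text. A minimal sketch of that classification step, assuming hypothetical variable names (this is not the study's actual code):

```python
def classify_reporter(energy_intake_kj: float, estimated_energy_requirement_kj: float) -> str:
    """Classify an energy-intake report using the EI:EER cut-offs stated in the text."""
    ratio = energy_intake_kj / estimated_energy_requirement_kj
    if ratio < 0.75:
        return "under-reporter"
    elif ratio <= 1.25:
        return "normal reporter"
    else:
        return "over-reporter"

# Example: a reported 7,000 kJ/d against an estimated requirement of 10,000 kJ/d
print(classify_reporter(7000, 10000))  # -> "under-reporter"
```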
Dietary intake Food-frequency questionnaire Dietary intake was assessed using the self-report semi-quantitative CMST-FFQ-v2, a paper-based modified version of the original 97-item CMST-FFQ, whose design and validation have been previously described. The CMST-FFQ was originally developed to enable dietary assessment in a cardiology population and measure MedDiet adherence in Australia. Relevant modifications to the CMST-FFQ included the addition of several fruit categories (citrus, berries, melon, other, stone and dried), the red/orange vegetable category and two cereal categories (crispbreads/crackers and other grains). Fruits, vegetables and grains are key components of the MedDiet and concentrated sources of carotenoids, thus evaluating their consumption is crucial when assessing carotenoid intake. The CMST-FFQ-v2 consists of 105 items including a 51-item FFQ (of which 6 are specific to fruits and 11 to vegetable and legume intake), and 54 supplementary dietary questions: 14 portion questions, 30 diet questions and 10 food habit questions. The FFQ required participants to report their consumption of food/beverages over the preceding 12 months and provided a choice of 10 response categories ranging from ‘never’ up to ‘3 times per day’. Portion size photographs were used to provide estimates of food portions for 14 commonly consumed foods. Foods with no portion options were assigned median portions from the 2011/12 Australian National Nutrition Survey, natural portion sizes, or as a last resort, portions recommended by the Australian dietary guidelines. The supplementary dietary questions encompassed information regarding fat and oil consumption, types of foods consumed, cooking methods, beverages and alcohol intake. Carotenoid bioavailability is subject to considerable variability, influenced by an array of factors both physiological and dietary. Carotenoids are lipophilic and demonstrate an increased bioavailability alongside the ingestion of dietary fats. How carotenoids are consumed is important to consider, particularly in the context of the MedDiet, as carotenoid-containing vegetables are often consumed alongside healthy fats like olive oil. The presence of these fats plays a role in enhancing the absorption of carotenoids, and this synergistic interaction is important in maximising the bioavailability of these crucial nutrients. Demographic data, anthropometric data, past medical history, supplement usage and smoking history were also collected from participants in the self-reported health and lifestyle section of the CMST, at baseline study visits or from medical records. Food records Participants completed a 7DFR with details described in Mayr et al. Briefly, verbal and written instructions regarding accurate completion were provided by a research dietitian prior to the baseline appointment. Instructions included direction to record food and beverage information at the time of consumption, such as: amount/volume of all items, food type, brand, method of preparation and recipes. Food scales were advised to be used where possible, and where not possible, direction was given to use household measures. For meals not eaten at home, participants were asked to provide as much detail as possible with approximate amounts consumed using the tools provided in the written information. Participants were instructed to complete the CMST-FFQ-v2 and 7DFR in the week prior to blood collection at the baseline appointment. 
All documents were checked for completeness by the study dietitian and nuances/missing information clarified with participants. Nutritional analysis Food records Dietary intake of carotenoids (β-carotene, α-carotene, lycopene, β-cryptoxanthin, lutein and zeaxanthin) from the 7DFR was calculated using the United States Department of Agriculture National Nutrient Database for Standard Reference (SR) Release 28 (USDA-SR-28) embedded within an Australian nutrient composition software program, FoodWorks (Version 10, Xyris Software Pty Ltd, Brisbane, Australia). Energy intake was assessed using the NUTTAB/AUSNUT databases within FoodWorks. The data was transposed from the 7DFR manually into FoodWorks by a study dietitian. For consistency of food item entry into FoodWorks, a food/product item repository was constructed to ensure identical selection of food items within the USDA-SR-28 database. The 7DFR analysis was also cross-checked by a dietitian to ensure consistency and accuracy. Food-frequency questionnaire Dietary intake of carotenoids from the FFQ within the CMST-FFQ-v2 (hereafter referred to as the FFQ) was computed via a 3-step method. First, grams of food per day were computed by multiplying frequency by portion size in grams. Second, a specifically constructed nutrient database utilising the USDA and NUTTAB/AUSNUT databases in FoodWorks contained the energy and carotenoid profile per gram for each food/beverage item in the FFQ; each item in this database was multiplied by portion size intake (grams) per day. FFQ items that contributed to carotenoid intake (no matter how small) included: fruits, vegetables, processed meat, offal, cereals and grains (breakfast cereal, pasta, noodles, bread, crispbreads), dairy (yoghurt, cheese, milk), eggs, nuts and seeds, snacks (all except muesli bars and lollies), chocolate (milk and dark variety), meals not prepared at home (all items), herbs and spices (oregano, curry powder, cinnamon, chilli), condiments (lemon juice, tomato sauce, pepper), margarine and butter, nut spreads, mayonnaise and salad dressings, and beverages (herbal tea, fruit juice, red wine and cider). Third, total daily carotenoid intake was obtained by tallying daily individual carotenoid intake across each food/beverage item consumed. 
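The 3-step FFQ computation described above is, in effect, a frequency × portion × nutrient-density multiplication summed over all contributing items. The sketch below illustrates the arithmetic with hypothetical items and purely illustrative carotenoid densities; in practice the study's own FoodWorks-derived database values would be used.

```python
import pandas as pd

# Hypothetical per-item inputs: reported frequency (times/day), portion size (g),
# and beta-carotene density (micrograms per gram). Values are illustrative only.
ffq_items = pd.DataFrame({
    "item":                      ["carrots", "spinach", "tomato sauce"],
    "times_per_day":             [0.5, 0.29, 1.0],   # e.g. a '3-4 times/week' response mapped to ~0.5/day
    "portion_g":                 [60, 75, 20],
    "beta_carotene_ug_per_g":    [83.0, 56.0, 2.6],
})

# Step 1: grams of each food consumed per day
ffq_items["grams_per_day"] = ffq_items["times_per_day"] * ffq_items["portion_g"]

# Step 2: per-item carotenoid intake = grams/day x carotenoid density per gram
ffq_items["beta_carotene_ug_per_day"] = (
    ffq_items["grams_per_day"] * ffq_items["beta_carotene_ug_per_g"]
)

# Step 3: total daily intake is the sum across all contributing items
total_beta_carotene = ffq_items["beta_carotene_ug_per_day"].sum()
print(f"Estimated beta-carotene intake: {total_beta_carotene:.0f} ug/day")
```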
Plasma carotenoid biomarkers Fasting blood samples were collected by experienced personnel using standard venepuncture techniques. Upon collection, blood samples were processed immediately and centrifuged, with plasma collected and stored in aliquots at –80°C until analysis. The tubes containing plasma samples to be analysed for carotenoids were immediately wrapped in foil to minimise light exposure. Plasma carotenoid samples were sent to an external laboratory (University of Newcastle, Newcastle, NSW, Australia) for analysis. High-performance liquid chromatography (HPLC) methodology was used to determine β-carotene, α-carotene, β-cryptoxanthin, lycopene and lutein/zeaxanthin (combined) concentrations in plasma. Total carotenoid concentration was calculated from the addition of all measured plasma carotenoids. All extractions were carried out under red light in a darkened laboratory, using validated methodology as described in Wood et al. Sample carotenoid peaks were identified and quantified using an Agilent 1200 Series High Performance Liquid Chromatograph with Chemstations software (Agilent Corporation, Germany). Separately, serum cholesterol was measured at a commercial laboratory (Dorevitch Pathology Pty Ltd, Heidelberg, VIC, Australia) using an automated blood analyser (ADVIA 2400 Chemistry System, Siemens). Statistics Descriptive statistics for baseline characteristics were presented as means ± standard deviation (SD), medians (interquartile range (IQR)) or frequencies (percentage) as appropriate. Carotenoid intakes measured from the FFQ and 7DFR were adjusted for energy intake using the nutrient residual method. Differences between measured intakes from the two dietary methods were examined using the Wilcoxon signed-rank test or independent Student’s t-test. Plasma carotenoid biomarker concentrations were adjusted for plasma cholesterol concentrations using the residual method due to a relationship existing between serum cholesterol and carotenoid levels. Spearman’s rho (ρ) or Pearson correlation (r) coefficients were used as measures of correlation to assess the validity between the three dietary assessment methods (FFQ vs. 7DFR, FFQ vs. biomarker and 7DFR vs. biomarker) for each individual carotenoid and total carotenoid intake, depending on variable distribution. Correlations were evaluated as poor (<0.2), moderate (0.2–0.6) or good (>0.6). Correlations between known confounding variables (including body mass index (BMI), gender, age, supplement use and smoking history) and measured carotenoid intakes from the FFQ and 7DFR were assessed using Spearman correlation (ρ) coefficients to determine the need for partial correlations (refer to Supplementary Materials 2, Table S1 and S2). No significant correlations were observed, thus obviating the need for partial correlations. Correlations between each of the dietary methods were utilised to enable calculation of the VC between theoretical true intake and estimated intakes from the FFQ, 7DFR (the reference method) and plasma carotenoid biomarkers using the methods of triads. Once correlation coefficients had been estimated, the following equations were utilised to calculate the VC for each carotenoid measurement method with 95% CI: ρ(QT) = √[(r_QR × r_QB)/r_RB] (1); ρ(RT) = √[(r_QR × r_RB)/r_QB] (2); ρ(BT) = √[(r_QB × r_RB)/r_QR] (3); where T = true unknown long-term dietary intake, r = correlation coefficient, Q = FFQ, R = 7DFR and B = biomarker. This analysis assumes random errors in each of the methods are uncorrelated and a positive linear correlation exists between estimations of true intake and dietary intake. Ocke and Kaaks suggest that the range for the VC utilises the estimated VC as the upper limit for all measures. The correlation coefficient between FFQ and biomarker is used as the lower limit for both FFQ and biomarker, and the correlation coefficient between 7DFR and biomarker is utilised as the lower limit for the 7DFR. VCs were classified as weak (ρ < 0.2), moderate (0.2 ≤ ρ ≤ 0.6) and high (ρ > 0.6). Analyses were performed using the statistical software SPSS® version 27 (IBM Corp, released 2021) with reported p-values being two-tailed and the level of significance set at 5%. 
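Once the three pairwise correlations have been estimated, the validity coefficients in equations (1)–(3) reduce to simple square-root expressions. The sketch below uses placeholder correlation values and is not the study's SPSS workflow; truncating coefficients above 1 reflects common practice for such cases rather than anything stated in the text.

```python
import math

def triad_validity_coefficients(r_qr: float, r_qb: float, r_rb: float):
    """
    Method-of-triads validity coefficients from the three pairwise correlations,
    with Q = FFQ, R = 7DFR and B = plasma biomarker. Coefficients above 1 can
    arise from sampling error and are truncated to 1 here.
    """
    vc_q = math.sqrt((r_qr * r_qb) / r_rb)  # FFQ vs. theoretical true intake
    vc_r = math.sqrt((r_qr * r_rb) / r_qb)  # 7DFR vs. theoretical true intake
    vc_b = math.sqrt((r_qb * r_rb) / r_qr)  # biomarker vs. theoretical true intake
    return tuple(min(vc, 1.0) for vc in (vc_q, vc_r, vc_b))

# Placeholder correlations for illustration only (not results from this study).
vc_q, vc_r, vc_b = triad_validity_coefficients(r_qr=0.39, r_qb=0.30, r_rb=0.35)
print(f"VC(FFQ)={vc_q:.2f}, VC(7DFR)={vc_r:.2f}, VC(biomarker)={vc_b:.2f}")
```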
Descriptive statistics for baseline characteristics were presented as means ± standard deviation ( SD ), medians (interquartile range (IQR)) or frequencies (percentage) as appropriate. Carotenoid intakes measured from the FFQ and 7DFR were adjusted for energy intake using the nutrient residual method. Differences between measured intakes from the two dietary methods were examined using Wilcoxon-signed rank-test or independent Student's t -test. Plasma carotenoid biomarker concentrations were adjusted for plasma cholesterol concentrations using the residual method due to a relationship existing between serum cholesterol and carotenoid levels. Spearman's Rho or Pearson correlation ( r ) coefficients were used as measures of correlation to assess the validity between the three dietary assessment methods (FFQ vs. 7DFR, FFQ vs. biomarker and 7DFR vs. biomarker) for each individual carotenoid and total carotenoid intake, depending on variable distribution. Correlations were evaluated as poor (<0·2), moderate (0·2–0·6) or good (>0·6). Correlations between known confounding variables (including body mass index (BMI), gender, age, supplement use and smoking history) and measured carotenoid intakes from the FFQ and 7DFR were assessed using Spearman correlation (ρ) coefficients to determine need for partial correlations (refer to Supplementary Materials 2 , Table S1 and S2 ). No significant correlations were observed, thus obviating the need for partial correlations.
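As a rough illustration of the nutrient residual method used for energy adjustment above (the same logic applies to the cholesterol adjustment of plasma carotenoids), here is a minimal sketch with made-up intake values; it is not the study's analysis code and the numbers are hypothetical.

```python
import numpy as np

def energy_adjust_residual(nutrient, energy):
    """Residual-method sketch: regress nutrient intake on energy intake, keep the
    residuals, and add back the predicted intake at the mean energy intake so the
    adjusted values remain on the original scale."""
    nutrient = np.asarray(nutrient, dtype=float)
    energy = np.asarray(energy, dtype=float)
    slope, intercept = np.polyfit(energy, nutrient, 1)   # simple linear regression
    predicted = intercept + slope * energy
    residuals = nutrient - predicted
    constant = intercept + slope * energy.mean()         # expected intake at mean energy
    return residuals + constant

# Hypothetical crude beta-carotene intakes (ug/d) and energy intakes (kJ/d)
beta_carotene = [2500, 4100, 3100, 5200, 2800]
energy = [7500, 9800, 8200, 11000, 7900]
print(energy_adjust_residual(beta_carotene, energy))
```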
Correlations between each of the dietary methods were utilised to enable calculation of the VC between theoretical true intake and estimated intakes from FFQ, 7DFR (the reference method) and plasma carotenoid biomarkers using the methods of triads. Once correlation coefficients had been estimated, the following equations were utilised to calculate the VC for each carotenoid measurement method with 95% CI: ρ_QT = √(r_QR × r_QB / r_RB) (1); ρ_RT = √(r_QR × r_RB / r_QB) (2); ρ_BT = √(r_QB × r_RB / r_QR) (3); where T = true unknown long-term dietary intake, r = correlation coefficient; Q = FFQ, R = 7DFR; B = biomarker. This analysis assumes random errors in each of the methods are uncorrelated and that a positive linear correlation exists between estimations of true intake and dietary intake. Ocke and Kaaks suggest that the range for the VC utilises the estimated VC as the upper limit for all measures. The correlation coefficient between FFQ and biomarker is used as the lower limit for both FFQ and biomarker, and the correlation coefficient between 7DFR and biomarker is utilised as the lower limit for the 7DFR. VCs were classified as weak (ρ < 0.2), moderate (0.2 ≤ ρ ≤ 0.6) and high (ρ > 0.6). Analyses were performed using the statistical software SPSS® version 27 (IBM Corp, released 2021) with reported p-values being two-tailed and the level of significance set at 5%. Demographic and clinical characteristics are presented in Table . The mean age of participants was 63.5 years, and a large proportion (87.2%) were male. The mean BMI of participants was 29.1 kg/m², 17.9% were current smokers and 74.4% of the cohort had experienced an AMI. Table presents the crude and energy-adjusted carotenoids as measured by the FFQ and 7DFR. The mean energy intake measured by the FFQ was lower than that from the 7DFR, although not significantly different. The median intake of both crude and energy-adjusted β-carotene, α-carotene, lycopene and total carotenoids was lower in the FFQ compared to the 7DFR, with all differences statistically significant. Intakes determined by the FFQ ranged from 1.08-fold lower for total carotenoid intake to greater than 3-fold lower for α-carotene intake for both crude and energy-adjusted measures. The median FFQ intake for crude lutein/zeaxanthin was over 2-fold higher than estimated by the 7DFR (3588.5 (2021.2–6031.9) vs. 1667.3 (1239.7–3588.6) µg/d, p = 0.004), with the same trend identified for energy-adjusted values (3813.8 (1267.5–3656.6) vs. 1877.1 (1267.5–3656.6) µg/d, p = 0.002). Table presents the crude and cholesterol-adjusted median and IQR of plasma biomarker measurements for each of the five carotenoids, total carotenoids and cholesterol. Cholesterol-adjusted median plasma biomarker concentrations ranged from 0.04 mg/l (α-carotene) to 1.30 mg/l (total carotenoid), with crude values remaining almost identical to cholesterol-adjusted values (Table ). Table presents the Spearman correlation coefficients between all the measured carotenoid values from the dietary assessment methods (FFQ and 7DFR) and the plasma biomarkers. Moderate correlations between the energy-adjusted carotenoids measured by FFQ and 7DFR were observed for all carotenoids except for lycopene. The strongest and statistically significant correlations were observed for β-carotene and lutein/zeaxanthin (ρ = 0.39 and 0.32, p < 0.05, respectively). All other carotenoids had non-significant correlations, with the poorest correlation observed for lycopene (ρ = 0.15, p > 0.05).
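A minimal numerical sketch of the triads calculation defined by equations (1)–(3) above; the three pairwise correlations used here are hypothetical placeholders, not the study's estimates.

```python
from math import sqrt

def triads_vc(r_qr: float, r_qb: float, r_rb: float):
    """Method of triads sketch: validity coefficients (VCs) of the FFQ (Q),
    7DFR (R) and biomarker (B) against unobserved true intake (T), computed
    from the three pairwise correlations (equations 1-3 above)."""
    vc_q = sqrt(r_qr * r_qb / r_rb)   # VC of FFQ vs. true intake
    vc_r = sqrt(r_qr * r_rb / r_qb)   # VC of 7DFR vs. true intake
    vc_b = sqrt(r_qb * r_rb / r_qr)   # VC of biomarker vs. true intake
    return vc_q, vc_r, vc_b

# Hypothetical pairwise correlations for one carotenoid (illustrative only)
vc_q, vc_r, vc_b = triads_vc(r_qr=0.39, r_qb=0.35, r_rb=0.45)
print(f"VC(FFQ) = {vc_q:.2f}, VC(7DFR) = {vc_r:.2f}, VC(biomarker) = {vc_b:.2f}")
# Note: a VC can exceed 1 (a "Heywood case") when sampling error or correlated
# errors distort the pairwise correlations; such values are usually truncated to 1.
```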
The crude correlations remained similar, with a trend towards some smaller correlations compared to the energy-adjusted values (except for lycopene, which increased marginally in correlation strength from 0.15 to 0.22, a difference of 0.07). Moderate correlations were observed for all energy-adjusted carotenoids measured by FFQ and biomarker, while significant correlations were observed for β-carotene, β-cryptoxanthin and total carotenoids, with the strongest correlations observed for β-carotene and total carotenoids (ρ = 0.39 and 0.37, p < 0.05, respectively). The remaining carotenoids demonstrated non-significant correlations, with the poorest correlations observed for α-carotene (ρ = 0.21, p > 0.05) and lycopene (ρ = 0.21, p > 0.05). The crude correlations for the FFQ vs. biomarker remained static or trended towards being marginally smaller compared to energy-adjusted values (with lutein the only carotenoid to marginally increase). Crude and energy-adjusted correlations tended to be stronger between the biomarker and 7DFR compared to the biomarker and FFQ, except for total carotenoids. The correlations between each of the three measurement methods (FFQ, 7DFR and biomarkers) for each measured carotenoid were used to calculate the VCs using the methods of triads. Table presents these calculated VCs alongside the 95% CI and the range for the VC. The energy-adjusted VCs for the FFQ (against true intake) for all measured carotenoids were moderate except for total carotenoids, which were classified as high. VCs for the FFQ ranged from 0.33 (α-carotene) to 0.61 (total carotenoids). The FFQ VCs for total carotenoids and β-carotene were the strongest (ρ = 0.61 and 0.59, respectively), followed by lutein/zeaxanthin, β-cryptoxanthin and lycopene (ρ = 0.52, 0.42 and 0.37, respectively), with α-carotene displaying the poorest VC (ρ = 0.33). The FFQ VCs were generally smaller in comparison to the 7DFR and biomarker VCs; the exceptions were lutein/zeaxanthin, which was stronger than the biomarker VC, and total carotenoids, which was larger than the 7DFR VC. All trends observed remained similar for crude VCs, although a trend towards larger VCs was observed for most carotenoids. The CMST-FFQ-v2 was developed to measure diet quality and adherence to traditional dietary patterns, such as the Mediterranean diet, in a culturally diverse Australian cardiology population. We previously demonstrated that the FFQ has good test–retest reliability and moderate validity against 7DFR in measuring energy, protein, carbohydrate and selected micronutrient intakes. The aim of this current study was to compare the CMST-FFQ-v2 in measuring energy-adjusted dietary carotenoid intake with intake estimated from a 7DFR and from plasma carotenoid concentrations, in a cohort of individuals with CHD. This assessment of the validity of the FFQ involved the calculation of correlation coefficients and VCs. The results demonstrated a moderate and significant correlation between the FFQ and plasma biomarker for β-carotene, β-cryptoxanthin and total carotenoids, while the FFQ VCs demonstrated a moderate to strong correlation for all measured carotenoids. Dietary carotenoid intakes were energy adjusted and analysed both by FFQ and 7DFR. The mean dietary carotenoid intakes measured by the FFQ were within the ranges observed in several other studies except for α-carotene and lycopene, which were lower in our study.
This may indicate that our FFQ is not sensitive enough to adequately capture intake of both α-carotene and lycopene, whereas it is comparable to other FFQs for the balance of carotenoids measured. Weighed FRs are the gold standard in food intake methodology and usually the first method of choice when validating a FFQ. In this study we have used the 7DFR as the method of reference, and additionally, we used the objective measure of plasma carotenoids (biomarkers) as another method of comparison through application of the method of triads. Three out of the five FFQ-measured carotenoids (β-carotene, α-carotene, lycopene), plus total carotenoids, had significantly smaller mean intakes than those reported from the 7DFR. Typically, FFQs are recognised to overestimate energy and nutrient intake compared to other dietary assessment measures. Our observations may be explained by the allocation of median serving size when portion selection was unavailable. This occurred for the red/orange vegetables group, which are indicators of α-carotene and lycopene intake. Additionally, aggregating individual foods into a single food group may cause dilution of true measured intake, e.g. α-carotene rich foods (orange/yellow vegetables and fruits) and lycopene rich foods (tomato and watermelon) are combined together or with other foods that differ in carotenoid composition and concentration. This can also be problematic when the individual foods within a composite group are not consumed in the same frequency or portion leading to a reduced ability to differentiate between single food items. Plasma carotenoids have been shown to be a useful and objective biomarker for fruit and vegetable intakes, which are the main food sources of carotenoids, and a reliable method for prediction of dietary carotenoid intake. Plasma carotenoid concentration can however be impacted by external factors outside of dietary intake, for example: baseline plasma carotenoid concentration of an individual, physiological variability in absorption and digestion, genetic and lifestyle factors (e.g. gender, age, BMI, smoking history), cooking methods, amount of fat consumed in meals (as carotenoids are fat soluble) and individual vitamin A status. As a result of the random variability influencing plasma concentrations unrelated to dietary intake, correlation coefficients observed between FFQ intake and biomarkers are often less than 0.4, as was the case in our study. There is a high degree of variability of reported correlations for dietary intake and plasma concentrations among different studies. A review by Burrows et al. incorporating 124 international studies identified correlations between FFQ intake and carotenoid biomarkers ranging from 0.26 to 0.39. This is comparable to the correlation range observed in our study (0.21–0.39). Individual carotenoid correlations observed in the review by Burrows et al. , and our study were also similar, except for β-carotene, where we identified a larger correlation (0.39 vs. 0.27) and α-carotene, where we recorded a smaller correlation (0.21 vs. 0.34). Correlations observed in our study for β-cryptoxanthin (ρ=0.33), lutein/zeaxanthin (ρ=0.25), and lycopene (ρ=0.21), were within the range reported in three Australian validation studies: β-cryptoxanthin, -0.002–0.46; lycopene, 0.13–0.29; lutein/zeaxanthin, 0.03–0.29. The correlations in our study were observed to be larger for β-carotene (0.39 vs. 0.22–0.28) compared to the Australian studies while marginally lower for α-carotene (0.21 vs. 0.26–0.36). 
Carotenoids that are ubiquitous in the food supply and those consumed in larger quantities showed stronger correlations between dietary intake and plasma level, for example β-carotene. Additionally, β-carotene is not closely regulated by a homeostatic mechanism (like some other carotenoids), making its plasma concentration more reflective of dietary intake. Despite α-carotene being abundant in the diet (like β-carotene), poorer correlations were observed. This may be attributable to various influencing factors. Firstly, the mixed food groupings described earlier may have diluted true intake. Secondly, food preparation and cooking techniques that impact α-carotene bioavailability may not have been captured adequately. Lastly, the portion size of the α-carotene-rich vegetable food group (i.e. orange/red vegetables) was the only main vegetable class determined by assigning a median value for portion size rather than by self-selection. The literature reports that when subjects can select their portion size, correlation coefficients are typically higher. The method of triads is a mathematical triangulation approach using comparisons between three different and independent measures of the variable being assessed to estimate a VC between each measurement method and the subjects' estimated true habitual intake. This technique assumes that any errors associated with each method are independent of each other. The VCs for each carotenoid measured were larger than their respective correlation coefficients, suggesting that the triads method (utilising both FFQ and 7DFR data) is a more predictive technique for determining serum carotenoid concentrations than using a single dietary assessment method. Artificially high VCs may result from differences in assessment of carotenoid intake time frames, i.e. the FFQ and 7DFR being completed the week prior to plasma carotenoid (biomarker) collection. In our study the observed FFQ VCs of measured carotenoids were all moderate-to-high (ranging from ρ = 0.33 to 0.61), suggesting the FFQ is a relatively reliable tool for measuring carotenoid intake. The FFQ VCs of carotenoids vary considerably between studies, with many only presenting VCs for β-carotene, thus making comparisons difficult. For the limited studies that examined the same five carotenoids as our study, the observed VCs were wide and ranged from 0.19 to 0.84 in an Australian study, and 0.31 to 0.98 in two studies from the Americas. The VCs observed in our study were similar or smaller, which may be attributable to the differences in sample sizes, populations examined and cultural food preferences. As previously noted, 7DFR were used as a surrogate measurement for the gold standard weighed FR. The 7DFR VCs for all carotenoids, with the exception of lutein/zeaxanthin, were stronger compared to the FFQ VCs. Similar trends have been observed for individual carotenoids in some studies, while others have highlighted a contrary position. Stronger VCs are typically expected for FRs due to there being a greater level of accuracy in the capture of true foods consumed and cooking methods, and less potential for overestimation, as compared with FFQs. When FFQ VCs were compared to biomarker VCs, the majority were smaller, except for lutein/zeaxanthin and total carotenoids. This trend is different to what has been observed in studies which report on a range of carotenoids.
While our results are not typical, similar findings to ours have been observed in studies that reported results based on a single carotenoid; for example, Daures et al. reported β-carotene VCs for FFQ and biomarker as 0.39 and 0.85, respectively, while Burri et al. reported lycopene VCs for the FFQ and biomarker of 0.49 and 0.66, respectively. Many of the inconsistencies observed between the results of validation studies, and in comparison to our study, may be a consequence of differences between the studies: utilisation of different FFQs, time frames assessed by reference methods, and biomarker concentrations that may vary with differences in laboratory testing and/or the isomers measured. Of particular importance is the difference in the time frame of carotenoid intake assessed by each measurement method within the current study. The FFQ measured intake over the preceding 12 months, the FR measured intake over 7 d, while carotenoid biomarkers likely represent the previous weeks to months of carotenoid dietary exposure. When making comparisons, it is desirable that each method assesses intake over the same time frame; this is particularly important for carotenoids, as their intake is subject to wide seasonal variation. This mismatch of time frames in this study may reduce the potential for detection of statistically significant relationships, reduce predictive performance and underestimate the true correlations that might otherwise have been observed within our study. Increasing the length of the reference method through the application of multiple 7DFRs (i.e. collected every 3 months over a 12-month time frame) would provide a measure of habitual intake more comparable to the dietary information collected in the FFQ, and also improve the capture of seasonal effects. Despite this time frame limitation, FFQs offer advantages over 7DFRs and biomarkers: they are easier to use, carry a lower participant burden, can capture seasonal variation and can be utilised within large populations, making them beneficial measurement tools. A key strength of the present study was the use of plasma biomarkers as an objective and independent measure of nutrient intake to validate FFQ-estimated intakes, and the use of the method of triads, which assists with correction for biases from correlated errors between dietary intake methods. This study is also one of very few which compares multiple dietary methods using a spectrum of carotenoids. Lastly, the FFQ has a unique design with a focus on the carotenoid-rich MedDiet pattern, and it assesses carotenoid-rich foods not often assessed by other FFQs (e.g. herbs and spices, condiments and mixed tomato-containing dishes). Several limitations of the present study should be noted. First, the relatively small sample size of participants (n = 39) may have resulted in underpowering and difficulty in reliably detecting significant correlations. Other scientific literature suggests that a minimum desirable sample size for validation studies is between 50 (when using biomarkers) and 100 participants. In addition, females were under-represented in our sample, which limits the generalisability. This is a common issue in clinical trials of CHD. Second, reproducibility was not assessed, due to the nature of the data collection, which utilised baseline data from the AUSMED Heart Trial.
Third was the use of the USDA database, which is based on the US food supply and may not accurately reflect nutrient composition in the Australian food supply, and thus may have reduced the likelihood of detecting relationships. Fourth, there is debate about whether a single blood measurement can reliably capture serum biomarker concentrations, due to individual variability and daily fluctuations. Fifth relates to the order of completion of the FFQ and 7DFR. While the 7DFR and CMST were instructed to be completed 1 week prior to the study appointment, no instruction was provided regarding the order of completion. This is a potential limitation of the study, as ideally the test instrument (i.e. FFQ) should be administered prior to the reference method (i.e. 7DFR) in order to prevent learned behaviours and biased responses. Last, mis-reporting of intake by participants using the FFQ can be impacted by social desirability bias or recall bias (memory), which can reduce accuracy of reported intake in comparison to objectively measured intakes (i.e. biomarkers). Further research is warranted using an increased sample size, assessment of reproducibility and exploration of alternative biomarkers (including skin and adipose tissue), which may provide a more suitable prediction of longer-term dietary carotenoid intake compared to plasma carotenoids. Additionally, potential FFQ modifications to improve accuracy of dietary carotenoid measurement include: expansion of groupings of similar foods to individual foods (this must be balanced against the desired FFQ length), and separation of the orange/red vegetable food groups alongside provision of photograph portion references to enable selection of portion size. In conclusion, this study demonstrated that the CMST-FFQ-v2 was able to estimate carotenoid intakes with moderate confidence for most of the measured carotenoids within this Australian cardiology cohort. Significant correlations were observed between FFQ-estimated intakes of β-carotene, β-cryptoxanthin and total carotenoids and plasma biomarkers, and moderate-to-strong FFQ VCs were observed for all measured carotenoids. There was, however, less confidence in the FFQ's ability to accurately measure intakes of α-carotene and lycopene, due to the poorer correlations and VCs observed. Addressing these limitations, making the suggested revisions to the FFQ and conducting a larger-scale investigation may help strengthen the ability of the FFQ to accurately measure dietary carotenoid intake.
Kucianski et al. supplementary material 1
Kucianski et al. supplementary material 2
What ethical conflicts do internists in Spain, México and Argentina encounter? An international cross-sectional observational study based on a self-administrated survey
8446c440-e662-4f11-816a-62af1a2f098b
11531189
Internal Medicine[mh]
Different studies on ethical conflicts in clinical practice show variability in conflicts across countries and ethno-cultural environments . Although ideally the goals of medicine are universal and the technological means of daily use in clinical practice may be similar in many countries, differences in the healthcare system, professional culture and priority values of a country can significantly condition the types of ethical problems that appear, as well as the way to address them. Likewise, responses to problems are conditioned by local or personal values . Local values are prioritized in a culture through its ethical norms, often transferred into legislation . Current societies are increasingly multicultural, very much in line with the phenomenon of globalization, which tends to homogenize and reduce differences between societies . However, the tradition and the predominant cultural characteristics in each country continue to have a prevailing character in the way of structuring healthcare systems and in the way of understanding and exercising the clinical relationship . In this sense, the existence of a bioethics common to the Mediterranean area has been postulated (applicable, among others, to Spain or Italy), which would have a more principlist character and with influence in Latin America , in contrast to other Anglo-Saxon bioethics, which tends to be comparatively more utilitarian and pragmatic. However, other authors consider that it is increasingly difficult to find regional differences in the way of approaching problems in bioethics , so that perhaps Mediterranean and Anglo-Saxon bioethics could be subsumed in one Western bioethics , possibly different from other Eastern bioethics . Comparing bioethical conflicts between countries can help identify specific regional problems and strengths, areas for improvement and models that may serve as a guide . To our knowledge, the differences in clinical bioethics between the Mediterranean (Spanish) and Latin American cultures have not been adequately analyzed to date. In particular, Latin American bioethics has influences from North America (Anglo-Saxon bioethics) and Western Europe (Mediterranean bioethics), but it also includes many particular characteristics, some associated with social justice and indigenous populations . For this reason, our main objective in this work was to compare the main ethical conflicts that internists have in Spain and in two Latin American countries, Mexico and Argentina. As secondary objectives, the frequency, importance, the difficulty involved in ethical conflicts and satisfaction in their resolution were analyzed. The specialty of Internal Medicine has been chosen for its holistic and comprehensive vision of the patient, and because it is a specialty in which many of the conflicts in clinical bioethics converge . Study design This is an observational and cross-sectional study, through a self-administered, voluntary and anonymous opinion survey, distributed through the Society of Internal Medicine of the three countries of the study. First, the survey was distributed in Spain to members registered in the National Society for Internal Medicine through an online platform (between June and July 2017). Subsequently (between October and December 2017), we considered the added interest that a comparative analysis with other countries would entail, and expanded the scope of our study to include Latin American countries. 
We contacted members of the Society of Internal Medicine of Mexico and Argentina to also carry out the study in those countries, and then the survey was distributed in Argentina and Mexico through their respective National Societies of Internal Medicine, also through an online platform. The distribution and data collection methodology in Argentina and Mexico was similar to that carried out in Spain. Preparation of the questionnaire The questionnaire was prepared by a multidisciplinary team made up of internists, experts in bioethics and research methodology. To prepare it, two bibliographic searches were carried out: the first to determine which were the main ethical conflicts described by internists; the second, on the questionnaires used to explore the presence of said conflicts. Based on these searches, a draft of the survey was written. After this, a trial was carried out in Spain with 10 physicians specializing in Internal Medicine and with 10 residents of the same specialty to optimize the writing and understanding of the questionnaire. Finally, the questionnaire was reviewed by the study researchers in Spain, Mexico and Argentina, to avoid cultural biases and ethnocentrism. The questionnaire is available in supplementary file (Questionnaire 1). Variables The survey scored the frequency with which professionals identify different ethical conflicts and their relevance in clinical practice, using a scale from 0 to 5. In the study, 19 types of ethical conflict were evaluated. Certain conflicts were removed from other similar questionnaires reviewed in the published scientific research: assisted suicide and euthanasia because they are illegal in Mexico and Argentina; abortion, reproductive problems, genetic counseling and transplants, as they are rare conflicts in internal medicine. Questions about patients’ caregivers refer to “family members“, because in the studied contexts, caregivers are usually family members . To explore the frequency, difficulty and satisfactory resolution of ethical conflicts, a Likert scale (1–4) was used. Variables were also collected about the professionals surveyed, including demographic data (age, gender), number of years of professional experience, position within the institution, scope of professional activity, training in bioethics, and hospital management model (public or other). Statistical analysis The qualitative variables are described using frequency tables and the quantitative variables with the mean and standard deviation. For the analysis of independence between non-dichotomous qualitative and quantitative variables, an analysis of variance (ANOVA) was carried out and a Student’s t test was carried out between dichotomous variables, a X 2 test was carried out between qualitative variables and the Pearson r correlation coefficient was carried out between quantitative variables. The level of significance was p < 0.05. The data were recorded in an Excel ® document (Microsoft Co., Redmond, WA, USA) and were exported and analyzed using SPSS Statistics 22 ® (IBM, Armonk, NY, USA). Ethical aspects The study complies with the ethical research norms and standards reflected in the Declaration of Helsinki of the World Medical Association and in the Oviedo Convention relating to human rights and biomedicine. All respondents consented to participate in the study and were aware of the objectives of the study. All responses were anonymous and treated with the utmost confidentiality, in accordance with current legislation . 
The research was approved by the committee of the Francisco Vallés Clinical Ethics Institute. Before completion in Mexico, the study was approved by the Mexican Society of Internal Medicine, the institution that assessed the ethical aspects of the study.
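To make the statistical analysis described above concrete, here is a minimal sketch of the between-group tests it lists (ANOVA, Student's t test, chi-square and Pearson correlation) using scipy; the variable names and data are made up for illustration and are not the study's data or code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical frequency-of-conflict scores (0-5) per country
spa = rng.integers(0, 6, 50)
arg = rng.integers(0, 6, 50)
mex = rng.integers(0, 6, 50)

# ANOVA for a non-dichotomous grouping variable (the three countries)
print(stats.f_oneway(spa, arg, mex))

# Student's t test for a dichotomous grouping (e.g. public vs. other hospital)
print(stats.ttest_ind(spa, mex))

# Chi-square test of independence between two qualitative variables
contingency = np.array([[30, 20],   # e.g. country x "conflicts frequent? yes/no"
                        [25, 25]])
print(stats.chi2_contingency(contingency))

# Pearson correlation between two quantitative variables
print(stats.pearsonr(rng.normal(size=50), rng.normal(size=50)))
```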
In total, 762 internists participated, 261 (34%) from Spain (SPA), 154 (20%) from Argentina (ARG) and 347 (46%) from Mexico (MEX). The sociodemographic characteristics of the samples are shown in Table . In ARG the average age (36 years) is lower than in SPA (45 years) and MEX (48 years). In ARG there are fewer men (38%) than in SPA (53%) and MEX (67%), and more residents (43%; in SPA 15% and in MEX 2%). In ARG, more internists work in public health (92%; in SPA 82% and in MEX 56%) and there are also more internists who have received university training in bioethics (70%; in MEX 38% and in SPA 35%). All these differences are statistically significant (p < 0.05). Inpatient activity is predominant in the three countries (SPA 95%, ARG 97%, MEX 82%), followed by outpatient activity (SPA 50%, ARG 38%, MEX 73.5%), but the majority combine inpatient and outpatient activity (SPA 52%, ARG 51%, MEX 63%). 70% of internists from SPA and 72% from ARG encounter ethical conflicts frequently or almost always in their healthcare practice (p > 0.05), while in MEX 48% do so (p < 0.05). Conflicts make care activities difficult almost never or rarely for 60% of SPA and ARG internists, while in MEX this is true for 82% (p < 0.05). For 68% of internists in SPA and for 62% in ARG, the reported degree of difficulty of ethical problems is moderate or very high, while in MEX 35% reported the same (p < 0.05). 92.4% of respondents from SPA and 92% from MEX reported having resolved ethical problems satisfactorily frequently or almost always, while in ARG 58% did so (p < 0.05). The average degree of satisfaction (on a 0–5 scale) in MEX is 4 (SD ± 0.86; p < 0.05), in SPA it is 3.5 (SD ± 0.79) and in ARG it is 3.3 (SD ± 0.93; p < 0.05) (Table ). In the three countries, women, those who work in public hospitals and the youngest (under 41 years of age and less than 21 years of professional practice) reported more ethical conflicts in their healthcare activity and more difficulty when facing them (p < 0.05). Women and those who work in public hospitals reported resolving them less satisfactorily (p < 0.05) (Table ). The most frequent and relevant ethical conflicts reported in the three countries are described in Table . In SPA and ARG the three most frequent conflicts coincide.
In the three countries (Table ), women, residents and those with formal training in bioethics reported encountering more of the ethical problems described in Table and gave them more importance (p < 0.05). On the other hand, those who reported encountering other ethical conflicts are younger (they have less professional experience) and report less satisfaction with the way the conflicts are resolved. In the three countries there is a directly proportional relationship between the frequency with which ethical conflicts are encountered and the frequency with which conflicts make daily healthcare practice difficult (r = 0.53−0.43), and also with the reported degree of difficulty of conflicts (r = 0.47−0.35). That is to say, those who report encountering more ethical conflicts also report that these conflicts make their healthcare activity more difficult, more often. Spanish, Mexican and Argentine internists identified the most frequent and relevant ethical conflicts as those around the end of life, especially those related to withholding and withdrawing life-sustaining treatment (WW). WW is the most frequent conflict in Spain and Argentina and the second in Mexico (the first among those who work in an inpatient setting). In addition to WW, other ethical issues at the end of life also stand out, such as palliative treatment or no cardiopulmonary resuscitation orders, which are a form of WW. When comparing the results between the three countries, there is a lot of similarity between the most prominent conflicts in Spain and Argentina. We think that there are several possible causes that may explain these results. One is the cultural similarity between Spain and Argentina. According to the Kogut and Singh index of cultural distance, which measures cultural differences between countries based on six dimensions, the cultural difference between Spain and Argentina is smaller than between Spain and Mexico. There are also significant cultural differences between Argentina and Mexico. The reasons underlying cultural similarities could fundamentally relate to three dimensions of Hofstede's model: power distance (PDI, defined as the extent to which the less powerful members of institutions and organisations within a country expect and accept that power is distributed unequally), individualism (IDV, the degree of interdependence a society maintains among its members) and masculinity (MAS). Regarding the latter dimension, it must be clarified that Hofstede's model ranks a cultural system's driving values on a scale where high, or masculine, scores are taken to indicate a high societal value of competition, achievement and economic or workplace success (defined as high ranking in a hierarchical order, or "best in class") that is inculcated in early stages of schooling and is a driving force of organisational life. Low, or feminine, scores according to Hofstede's model are interpreted as characteristic of cultural systems where caring for others has a high societal value and success is defined in terms of quality of life. Hofstede summarizes: "The fundamental issue here is what motivates people, wanting to be the best (Masculine) or liking what you do (Feminine)". There is a higher PDI in Mexico (81) compared to Spain (57) and Argentina (49), which implies more paternalistic and hierarchical attitudes. This can condition the clinical relationship.
In Mexico there is a lower IDV (30) (or greater collectivism) compared to Spain (51) and Argentina (46), which is related to the search for belonging to the group, the tendency to obey, the avoidance of conflict and high-context communication. Finally, in cultures with a higher MAS, such as Mexico (69; Argentina 56 and Spain 42), negotiation and the capacity for integration are lower. For internists in Spain and Argentina, ethical problems related to the end of life are more important than in Mexico. An explanation for this finding is the type of activity of the internists, since among Mexican internists who work in hospitalization, WW is the most frequent conflict (it is the eighth most frequent among those who work in the outpatient setting). Another aspect that may influence this result is the greater acceptance of death in Mexico, which in turn is correlated with lower life expectancy or the greater cultural presence of religion. In Mexico, death and the treatment of the dying are less taboo topics than in other countries. We must also note other factors that can lead to fewer ethical problems at the end of life in Mexico, such as the underdevelopment of palliative care units. The importance in our study of WW as an ethical conflict is in line with other studies. WW decisions are complex for many reasons: their variety, their difficulty and the lack of adequate training. Work carried out on a similar sample showed that only 25% of Spanish internists have an adequate knowledge of what WW is. The same happens in Mexico, where WW is mislabeled as "passive euthanasia", being rejected by 44% of residents and by 47.9% of medical students. The confusion in Mexico between WW and the misnamed "passive euthanasia" also exists with palliative sedation. It has been proposed that the term WW can cause rejection in Mexico for religious reasons, and it is of note that the idea of a "medical miracle" (which would prevent withdrawing life support measures) is still strongly rooted in Mexico. In Argentina, the main barrier to WW is legal: 36% consider that it lacks adequate legal support, and only 15% consider that it is an ethical issue. After the ethical problems at the end of life, the most frequent and relevant group of ethical problems, especially in Mexico, are those linked to the clinical relationship: doctor-patient communication (the most frequent in Mexico), conflicts with family members or problems with confidentiality. It is possible that outpatient activity, which is significantly more common in Mexico, predisposes to more conflicts related to communication. However, problems with communication are the most relevant in Mexico, both for those who work in inpatient and in outpatient settings. We must note that in Mexico there is a greater power distance and that the clinical relationship is more paternalistic and hierarchical, which causes more dependent behaviors with respect to authority. Paternalism as a source of ethical problems has also been described in Mediterranean countries such as Italy or Greece, where patients report not always wanting to know the truth of their clinical situation. The paternalistic clinical relationship results in submission to the doctor as a form of respect and gives more value to the family in decision making. In Argentina, the autonomist influence exerted by doctors trained in the United States is evident.
In Spain, work has been done on patient autonomy for several decades, with extensive legislative development when compared to Mexico or Argentina. However, Spanish internists report more conflicts with families. This is probably due to the lower power distance in Spain, which leads to more horizontal clinical relationships. These are not, however, exempt from ethical problems, with increasing confrontation due to differences in criteria between doctors and patients or their relatives. Problems with confidentiality, most prominent in Mexico and Argentina, may also have a cultural basis. More collectivist societies (markedly Mexico) and those with low tolerance for uncertainty are more concerned about privacy. It is also important to note that conflicts with confidentiality, although they occur in all clinical settings, are more linked to less reliable health systems, to conditions that cause social stigma (HIV, among others), to marginalized communities, and to situations involving immigration, domestic violence or abortion (in Argentina), among others. The third group of problems is more directly related to respect for the patient's autonomy. In Mexico and Argentina, conflicts with informed consent stand out, while in Spain capacity (assessment of capacity and decision-making in people without capacity) and rejection of procedures stand out. Informed consent, which can be considered the explicit putting into practice of respect for autonomy in a clinical setting, can cause greater conflict in less autonomous cultures. In Mexico, for example, there are official recommendations that emphasize not being "too explicit" with the patient, in case the information generates "distress, depression or fear". Given that in Spain patient autonomy is considered more valuable, more problems related to decision-making capacity appear, since this is a prerequisite to be able to exercise autonomy. It is of note that in Spain life expectancy is 7–8 years longer than in Mexico and Argentina, and therefore there are more patients with cognitive impairment and with loss of decision-making capacity. Finally, in the three countries there are few conflicts arising from advance directives, undoubtedly because their implementation is very rare in the countries of the study. Lastly, a varied group of problems appears: conflicts with colleagues (more frequent in Spain), with other professional groups, with the distribution of resources and cultural conflicts. In last place are conflicts with third parties, due to mistreatment and favorable treatment of patients. Regarding conflicts of interest (for example, with the pharmaceutical industry or with public administration), it is striking that they are not highlighted more, because they are considered a serious problem when studied specifically. In all three countries, it has been described that the pharmaceutical industry unduly influences prescription in a significant number of clinicians. It has been postulated that there may be a cognitive bias in clinicians regarding the influence of the pharmaceutical industry on their decisions, minimizing its importance. The findings of studies carried out in other countries have common aspects with ours. In the United States, conflicts at the end of life were also identified as the most frequent and difficult in routine clinical practice, while those in the clinical relationship are less common. Justice conflicts also stand out.
In a multicenter European study (Italy, Norway, Switzerland, United Kingdom), conflicts related to autonomy in decision-making predominated (94.8%), followed by disagreements between caregivers (81.2%) and conflicts related to WW and to the lack of CPR orders (79.3%). Conflicts due to cultural or religious reasons, as in our work, are rare. Seven out of ten internists in Spain and Argentina reported encountering ethical conflicts in their clinical practice frequently or almost always, while in Mexico less than half (48%) did so. On the other hand, in Spain and Argentina ethical problems frequently or almost always made clinical practice difficult for four out of ten, more than double that in Mexico (18%). When asked about the degree of difficulty of ethical problems, it was moderate or high for more than 60% of internists in Spain and Argentina, and only for 35% of Mexicans. Therefore, as internists encounter more ethical conflicts, more find it difficult to resolve them, as is the case in Spain and Argentina. These findings may be due to a certain "axiological blindness": if ethical conflicts are not identified (as was more frequently reported in Mexico), one is not aware of the problems associated with said conflicts. In the Spanish and Argentine cohort, healthcare professionals with formal training in bioethics (and, therefore, who could be more sensitized) encountered more conflicts and found them difficult more frequently. All of this would reinforce the "Dunning-Kruger" effect: people with a lack of knowledge and skills are more likely to overestimate themselves and not perceive their decisions as wrong. For this reason, we consider it essential to increase training in bioethics in order to raise awareness among clinicians and increase the detection of and engagement with ethical problems. The same applies to women (more sensitivity) and public workers (more solidarity), which could explain such findings (encountering more conflicts and more often finding them difficult to resolve) in countries with lower MAS such as ARG or SPA. Our study has been carried out in countries that share a language and historical influences, while their healthcare, legislative and economic systems show certain differences. In general, the data from internists in Argentina and Spain are similar: they identify the same ethical problems (the same typology and with the same frequency) and consider them difficult to a similar degree. However, professionals from both countries differ regarding their satisfaction when solving them. In Spain and Mexico the satisfactory resolution is higher: 92% resolve ethical conflicts satisfactorily frequently or almost always, while in Argentina only 58% do so. Professional experience (which was longer in Spain and Mexico) is a possible explanation for these data. In fact, the subgroup of residents (more inexperienced) in Argentina is the one with the least satisfaction, whereas those with more than 20 years of experience were the most satisfied. This study has the limitations inherent to studies carried out with self-administered questionnaires and closed answers. The degree of comprehension of the questions and the reasons that motivate the answers are unknown, and we cannot be sure that the participants have limited their interpretation of conflicts to the examples being cited in the questionnaire. There is also the possibility of a selection bias: that the respondents are more sensitive or interested in the subject studied.
Furthermore, the sample may not be representative because a sample size calculation was not conducted, the three samples present differences in their size (Mexico's is larger) and sociodemographic characteristics, and they only represent a proportion of the internists in each country. The time lapse since data collection is significant, and the impact of the COVID-19 pandemic, the new regulation on euthanasia in Spain and advances in artificial intelligence and telemedicine are not evaluated in these results. Regarding its strengths, our study has a large sample size, the largest carried out with these characteristics to date. Furthermore, the methodology used to develop the survey was exhaustive, and a sufficient number of surveys was obtained in each country to be able to draw robust conclusions. Our findings suggest that the main ethical conflicts that internists in Spain, Argentina and Mexico face are related (in order) to the end of life, to the clinical relationship and to the patient's autonomy. WW is the most frequent conflict in Spain and Argentina and the second in Mexico (the first among those who work in the inpatient setting). There is a lot of similarity between the most prominent conflicts in Spain and Argentina. Seven out of ten internists in Spain and Argentina report encountering ethical conflicts in their clinical practice frequently or almost always, while in Mexico less than half do so. In Spain and Argentina, ethical problems are considered more challenging and, in addition, they more commonly negatively influence daily clinical practice: four out of ten internists in Spain and Argentina reported that ethical conflicts frequently or almost always made their clinical practice more difficult, more than double the proportion in Mexico. In Argentina, internists are less satisfied with the way ethical problems are resolved. To explain these differences, we have proposed different socio-cultural factors, among others: a positive assessment of death would decrease end-of-life ethical issues; paternalism would increase conflicts in the relationship with the patient; individualism would increase conflicts in the relationship with the patient's family and decrease privacy conflicts; and a lower masculinity index, public organization of the healthcare system and formal training in bioethics would increase the frequency of encountering ethical conflicts, as well as of finding them difficult. Below is the link to the electronic supplementary material. Supplementary Material 1
American Society of Hematology: building a comprehensive minority recruitment and retention professional program
c33338cf-57ec-49dd-9447-c0ead7fd6454
11697047
Internal Medicine[mh]
The diversity of the scientific and health care workforce in the United States is not representative of the population that it serves. It has been shown that diversity in the physician workforce improves patient health and treatment outcomes. Individuals from backgrounds underrepresented in medicine are more likely to serve communities of color, which in turn increases health care access to socially disadvantaged groups, decreases health care disparities, and improves diversity in clinical trials. Patients treated by physicians from the same race or ethnicity report improved satisfaction with treatment and improved patient-provider communication. In 2003, the Institute of Medicine recognized the need to diversify the medical workforce, noting that diverse clinicians would have a positive impact on patient-physician communication and engagement with racial and ethnic minority populations, and ultimately improve health care inequities. African Americans constitute 13.6% of the US population, yet they represent only 4.1% of hematology-oncology trainees. Similarly, Hispanic/Latin/o/a/x individuals represent 18.9% of the US population but constitute only 5.7% of hematology-oncology trainees. Furthermore, for historically underrepresented minority groups (Black/African American, Hispanic/Latin/o/a/x, Native American/Alaskan Natives, Pacific Islander, Inuit, or First Nation Peoples), there is considerable attrition at each educational level, representing the challenging path to completing medical education and training to become academic faculty. Underrepresented minority students with science, technology, engineering, and math majors are more likely to transfer to nonscience majors before graduation than their White counterparts. , In fact, in 2022, only 9% of doctoral recipients in science/engineering fields identified as Hispanic/Latin/o/a/x, 6% identified as Black, and 0.5% identified as Native American/Alaskan Natives. Additionally, Black and Hispanic doctoral students in science majors are less likely to receive financial stipends compared with their White counterparts. As a result, 81% of Black or Hispanic doctoral students borrowed over $40 000 in loans to obtain their graduate education, compared with only 6% of White doctoral students. In recent years, there has been a renewed focus on the barriers that continue to limit diversity and equity, with a concomitant recognition of the structural, racist policies and systems that exist as barriers to success. Programs promoting diversity and inclusion have existed for decades. However, they have not demonstrated significant success in changing the demographics of medicine and science, with some data suggesting that they can be counterproductive. In fact, despite prolonged efforts to create pathway programs aiming at improving the diversity of the physician workforce, the majority of medical school graduates continue to self-identify as White. , The American Society of Hematology (ASH) was a pioneer in responding to the call to action from the Institute of Medicine and developed programs meeting the needs of racial and ethnic populations that are underrepresented in medicine to increase the diversity of physicians and scientists in hematology. ASH demonstrated early and sustained commitment to promoting diversity and equity at a time when it was rare for societies to engage in this work. Here, we evaluate the success of ASH’s Minority Recruitment Initiative (MRI) program and highlight areas for improvement. 
The governance structure of ASH includes an executive committee and 14 standing committees that recommend policies, programs, and actions to the executive committee. Each Committee has its own mandate. The Minority Affairs Committee was created in 2003 as an ad hoc committee with the mandate to improve diversity in the hematology workforce. To address this mandate, the committee launched the MRI. Subsequently, ASH created a summer research award for minority medical students, the Minority Medical Student Award Program. Within 2 years of launching the initiative, ASH expanded the MRI’s outreach to include the retention of early career faculty. This was accomplished by partnering with the Harold Amos Medical Faculty Development Program (AMFDP), a national program of the Robert Wood Johnson Foundation that was created in 1983 to increase the number of health care professionals from historically disadvantaged backgrounds remaining in academic medicine. Through this partnership, ASH began to financially support 1 to 2 early career (ASH-AMFDP) awardees per year . Eight years after its formation, the ad hoc Minority Affairs Committee transitioned to a permanent standing committee, renamed the Committee on Promoting Diversity. In that same year, ASH recognized the additional need to support doctoral scientists and created the Minority Graduate Student Abstract Achievement Award. As the MRI grew, ASH was responsive to feedback from volunteers, mentors, and awardees and enhanced the initiative as needed. For example, in 2016, ASH created a national honorific award to recognize leaders who promote and embrace diversity. In 2018, the ASH Ambassador Program was created to help improve the geographical diversity of applicants. By 2022, 3 additional awards had been developed, resulting in an unbroken longitudinal pathway of support for clinical and doctoral scientists . All MRI awards include mentorship and a paid research opportunity. These key components have been shown to improve diversity in science fields. Studies have reported minority students who participate in mentoring programs have lower attrition, higher grade point averages, and increased self-efficacy. Furthermore, career development mentors have been shown to help underrepresented students navigate challenges, such as the unwelcoming climates that are often present in higher education. Finally, minority students who participate in research opportunities, particularly paid internships, are more likely to be retained in science, technology, engineering, and math careers. Awards pathway Currently, the MRI encompasses 6 awards and applicants apply for an award based on their degree and/or career level . The longitudinal pathway for clinical scientists (physicians or physicians in training includes 4 awards): (1) the medical student award, Minority Medical Student Award Program; (2) the resident award, Minority Resident Hematology Award Program; (3) the fellow award, Minority Hematology Fellow Award; and (4) the early career award, ASH-AMFDP. The longitudinal pathway for doctoral scientists includes 3 awards: (1) the graduate student abstract achievement award, Minority Graduate Student Abstract Achievement Award; (2) the graduate student award, Minority Hematology Graduate Award; and (3) the fellow award, Minority Hematology Fellow Award. 1. The medical student award was the inaugural award developed by the MRI in 2004. This award has been expanded over time and currently includes opportunities for participation throughout medical school. 
A critical component of this award is a dual mentorship structure that pairs each awardee with both a research mentor and a career development mentor. The career development mentor’s role is to support the medical student by giving advice to navigate challenges or difficult environments, encourage them to apply for additional ASH awards, and provide holistic support through medical school and beyond. A financial stipend is provided to the student, and additional funds are provided to support the research project. After their first experience, medical student awardees are encouraged to apply for an additional experience to support ongoing mentorship and research. 2. The resident award was developed as a “next step” on the longitudinal pathway after the medical student award. Similar to the medical student award, the resident award also includes the dual mentorship structure. The goal of this award is to provide research support for a hematology-related project that is conducted part-time (320-480 hours) over the course of 1 year. A stipend is provided to the student, and additional funds are provided to support the research project. After their initial award, resident awardees are encouraged to apply for an additional experience to support ongoing mentorship and research. 3. The fellow award is the only award open to both medical and doctoral trainees. The goal of the fellow award is to provide protected time for clinical fellows or postdoctoral graduate students to generate sufficient expertise or preliminary data to be competitive when applying for future awards. The fellow award provides salary support as well as funds to support a hematology research project for a 2- or 3-year period. Due to the nature of the fellow award, it is not renewable. 4. The early career award allows a physician committed to a career in academic medicine protected time to conduct research. This award provides 4 years of salary support as well as funds to support a hematology research project. Due to the nature of the early career award, it is not renewable. 5. The graduate student abstract achievement award is the only award that provides recognition for research already conducted. This award recognizes doctoral students who are authors of abstracts that have been accepted for an oral or poster presentation at the ASH annual meeting. Initially this award was given as a travel stipend to encourage attendance at the ASH annual meeting; however, the graduate student abstract achievement award is currently given as a stipend in recognition of meritorious science. Previous awardees are encouraged to reapply for the award in subsequent years if the eligibility criteria are met. 6. The graduate student award encourages doctoral students to pursue a career in academic hematology. The graduate student award provides 2 years of funding for salary support, the hematology research project, training-related expenses, and travel to the ASH annual meeting. Due to the nature of the graduate student award, it is not renewable. MRI awards eligibility All trainee awards require the research mentor to be an ASH member and for the research to be conducted in the United States or Canada. For the medical student and resident awards, if the applicant is unable to identify a research mentor at their home institution, then they can request to be “matched” with an ASH research mentor at another institution. The matching process is performed by ASH member volunteers and ASH staff. 
The early career award (ASH-AMFDP) eligibility criteria differ from the trainee awards. Unlike the trainee awards, the early career award does not require the research mentor to be an ASH member, and it only supports research conducted in the United States. Additional eligibility criteria and respective award benefits can be found at https://www.hematology.org/awards . Recruitment strategy In the early years of the MRI, ASH advertised to dean’s and financial aid offices of accredited allopathic and osteopathic medical schools. Currently, ASH uses a variety of methods to encourage applicants to apply. Potential applicants and mentors are exposed to opportunities through ASH communication channels including social media (ASH website, X, Facebook, Instagram, LinkedIn, and YouTube), hematology news publications and emails to the ASH membership database. ASH also recruits from science, technology, and mathematics conferences focused on minority clinicians and scientists. ASH Ambassadors are encouraged to share opportunities with potential applicants at their home institutions. Additionally, prior promising applicants who did not receive funding are contacted and encouraged to reapply for the upcoming funding cycle. Nonetheless, the primary source of recruitment is from previous MRI mentors and past/current participants as they become unofficial program ambassadors. Active award recipients present their research experience during the annual meeting at a symposium. This event is the largest platform for recruiting mentors with >300 audience members, including National Institutes of Health (NIH) program officers, academic program chairs/chiefs, past awardees, and the president of ASH. Study section Awards are competitively selected through a rigorous review process modeled on the NIH study section and led by ASH member volunteers with relevant scientific expertise and understanding of program goals. This process includes adherence to the ASH conflict-of-interest policy. The review considers multiple aspects of each application including feasibility, innovation, and significance of the research proposal, the potential of the applicant, personal interest in hematology, likelihood of retention in the field of hematology, and the mentor/mentorship plan. The mentor’s demographic background is not part of the review discussion. Eligible applicants applying for a second year of funding are required to demonstrate productivity during their first year of funding, completion of all prerequisites from the prior cycle, and maintain the support of their research mentor to be competitive for additional funding. Based on the American Association of Medical Colleges endorsement of holistic review as an effective strategy to recruit diverse applicants, , most study sections use holistic review to allow for the consideration of experiences and personal attributes in addition to traditional metrics. Members of the study section are trained in the process. A driving criterion is the applicant’s likelihood of retention in the field of hematology as reflected in the personal statement and letters of recommendation. The “distance traveled” is also considered, recognizing that there may be distinct challenges that applicants have overcome and understanding that all persons are not given the same opportunities to thrive. Training members of the study section in holistic review has evolved over time. 
Initially, the chair of the study section would describe the goal at the beginning of the study section and as needed during the study section. Currently, study section reviewers are emailed a short 5-minute video before the preliminary review process. The video not only emphasizes the goal of holistic review but also describes the impact of implicit bias. Four awards (medical student, resident, fellow, and graduate student award) are decided over a 2-day study section. The graduate student abstract achievement award is decided separately during a half-day study section. The graduate student abstract achievement award study section uses the NIH scoring system; however, reviewers do not incorporate a holistic review of the application. Applicants are evaluated on their research potential, leadership, interest in hematology, and quality of their submitted abstract. The early career award selection is a multistep process. Applicants submit their research experience, career objectives, personal references, and a mentored training plan. Semifinalists are then selected for interviews that allow for the applicants to fully describe their research interests, institutional resources, and research environment. Funding In addition to funding received from ASH, ASH donors generously contribute to the ASH Foundation, and 100% of funds designated for MRI are used to extend the reach of the initiative. Metrics of success The short-term success of the medical and graduate student MRI awards was evaluated by comparing MRI graduation percentages with the national estimates of minority student graduation percentages. Short-term success was focused only on the medical student (2004-2018) and graduate student abstract achievement (2011-2017) awards because the other trainee awards were recently initiated, and recent awardees have not had time to graduate from their training programs. Long-term success was determined by continued engagement with ASH after their award and/or retention into hematology. Descriptively, continued engagement with ASH was evaluated for all MRI awardees. Retention in hematology was defined as being board eligible or board certified in hematology or hematology-oncology. This metric was evaluated for medical student (2004-2014 cohorts to allow time for the completion of education and fellowships) and early career, and it was compared with the national estimate of minority hematology-oncology faculty in academia. Data collection Data related to graduation and specialty areas was obtained online from publicly available information that was accessed from June to August 2023. Data related to engagement with ASH were obtained from ASH databases through November 2023.
There were 405 individuals across the 6 awards programs from 2004 to 2022. The majority of these individuals received the medical student award (240 awardees from 2004 to 2022). Comparatively, the fellow and graduate student awards are the newest awards, and from 2020 to 2022, there were 17 and 19 recipients, respectively. Although the majority of awardees self-identified as Black , the early career, fellow, and graduate student awards had >40% of awardees identify as Hispanic/Latin/o/a/x.
For all awards, except the early career award, the majority of awardees self-identified as female . For the early career award, 13 of 26 (50%) identified as female, and 13 of 26 (50%) identified as male . The majority of MRI recipients attended or were faculty (early career awardees) at an academic institution in the United States. Of all the awards, the medical student award has had the largest geographical reach . Notably, across the 6 different awards, there were 13 states that have never had a recipient (Arizona, Connecticut, Idaho, Maine, Mississippi, Montana, Nevada, North Dakota, Rhode Island, South Dakota, Vermont, West Virginia, and Wyoming). Awardees also came from programs outside of the United States. At the time of the award, 2 medical student award recipients attended medical school in Canada, and 1 medical student award recipient attended medical school in the Caribbean . One resident award recipient was in a residency program in Canada at the time of the award , and 1 graduate student abstract achievement award recipient attended a doctoral program in Canada . Outcomes Medical school attrition was determined for the medical student cohorts from 2004 to 2018. During this time frame, there were 184 recipients; however, 3 were not evaluable (1 died, 1 currently in MD/PhD program, and 1 lost to follow-up). Of the 181 evaluable awardees, medical school attrition was 2.2% (95% confidence interval [CI], 0.61-5.6). Medical student awardees attrition from medical school was descriptively similar to the attrition reported for White non-Hispanic students (2.3%) although not statistically different from the reported underrepresented minority student attrition (5.6%). Notably, the 4 participants who did not graduate medical school all received advanced degrees (2 doctoral-level degrees and 2 master-level degrees). Graduate school attrition was determined for graduate student abstract achievement cohorts from 2011 to 2017. Of the 32 recipients, graduate school attrition was 0% (1-sided 97.5% CI, 10.6). This was significantly lower than the 36% attrition reported for underrepresented minority doctoral students in science and engineering fields. The MRI provides longitudinal support and encourages recipients to stay on the pathway and apply for subsequent awards with the anticipated outcome of increased likelihood of retention in the field. Of the 77 individuals who received >2 MRI awards, 25 received progressive awards (ie, medical student awardee subsequently received a resident award). Long-term outcomes were evaluated for medical student cohorts from 2004 to 2014. There were 97 board-eligible/board-certified awardees, and over half (55/97 [56.7%]) were currently in academic positions (ie, not currently in training). Furthermore, 14.4% (95% CI, 8.1-23.0) were board eligible or board certified in hematology . Comparatively, only 5.7% of medical oncology faculty, which includes hematology-oncology, were considered underrepresented in medicine in 2019. Although not subspecialized in hematology, 1 medical student awardee (board certified in radiation oncology) presented a poster at the ASH annual meeting in 2021, demonstrating active engagement in hematology research. Inclusion of this participant would bring the medical student retention in hematology proportion to 15 of 97 or 15.5%, with 11 of 15 (73%) currently at academic institutions. The majority of early career recipients (25/26 [96%]) remain in academia. 
Their specialties include the following: 14 (54%) in adult hematology; 6 (23%) in pediatric hematology; 3 (11.5%) in hematopathology; 1 (3.8%) in transfusion medicine; 1 (3.8%) in psychiatry; and 1 (3.8%) in pulmonary medicine. Overall, 23 of 26 or 88.5% (95% CI 70.0%, 97.6%) are practicing hematologists. This percentage is significantly greater than the 5.7% underrepresented minority faculty in medical oncology reported in national estimates in 2019. MRI alumni were actively engaged in hematology research beyond their MRI experience. From 2004 to 2022, 225 of 380 awardees (59%) were authors on 1105 ASH abstracts presented at the annual meeting (798 abstracts presented as poster presentations; 307 abstracts presented as oral presentations). This does not include any abstracts that may have been presented before the initial MRI award year. MRI alumni also remain engaged in ASH through volunteer leadership roles. Forty-five alumni have served in 353 different roles, including reviewers on study sections, contributing editors for ASH publications, committee members, chairs of committees, and serving on the ASH executive committee . Additionally, 42 of 234 individuals (18%) who no longer receive complementary ASH membership as a benefit of their award continue to renew their ASH membership. Funding ASH has invested more than $15 million to fund these experiences in hematology research since the MRI’s inception. This figure represents only award funds committed to recipients and does not include any other costs associated with programming or indirect costs associated with the initiative.
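The attrition estimates and confidence intervals reported in the outcomes above can be reproduced from the raw counts. The article does not state which interval method was used, so the short Python sketch below is an illustration only: it assumes exact Clopper-Pearson intervals, which give values close to those quoted (small differences may reflect a different method or denominator).

    from scipy.stats import beta

    def clopper_pearson(k, n, conf=0.95):
        # Exact two-sided binomial confidence interval for k events out of n trials.
        alpha = 1 - conf
        lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
        upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
        return lower, upper

    # 4 of 181 evaluable medical student awardees did not graduate (about 2.2%).
    low, high = clopper_pearson(4, 181)
    print(f"medical student attrition: {4 / 181:.1%} (95% CI {low:.2%}-{high:.1%})")

    # 0 of 32 graduate student abstract awardees left graduate school; with zero
    # events only the upper bound of a one-sided 97.5% interval is informative.
    upper_only = 1 - 0.025 ** (1 / 32)
    print(f"graduate attrition, one-sided 97.5% upper bound: {upper_only:.1%}")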
What began in 2003 as a project of an ASH ad hoc committee has evolved into a longitudinal pathway of awards supporting individuals belonging to historically disadvantaged groups to become and remain involved in academic hematology. The ASH MRI program was built upon evidence-based best equity practices (ie, holistic file review, mentorship, and research stipends). , In fact, studies have shown that academic institutions that use holistic review improve recruitment of targeted populations such as underrepresented minority students, female students, first-generation college students, and students from disadvantaged backgrounds. Success of the ASH MRI program is evident in the significantly lower graduate school attrition proportions for minority doctoral students than what would be expected (0% vs 36%) and minority medical school attrition percentages that were descriptively similar to those of White non-Hispanic medical students (2.2% vs 2.3%). Notably, 14.4% (95% CI, 8.1-23.0) of medical student awardees are now either board eligible or board certified in hematology, which is significantly greater than what would be expected (5.7% underrepresented medical oncology faculty). The early career award has been highly successful with 88.5% (95% CI, 70.0-97.6) of the awardees currently practicing as hematologists, and almost all (25/26 [96%]) remain in academia.
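The comparisons above are described as significantly greater than the 5.7% national estimate, but the test used is not named. One plausible way to check such a comparison, sketched below purely as an illustration rather than as the authors' analysis, is an exact binomial test of the observed counts against that fixed reference proportion.

    from scipy.stats import binomtest

    national_rate = 0.057  # cited national estimate of underrepresented medical oncology faculty (2019)

    # 14 of 97 medical student awardees board eligible/certified in hematology.
    medical_students = binomtest(14, 97, p=national_rate, alternative="greater")
    print(f"medical student awardees vs 5.7%: p = {medical_students.pvalue:.2e}")

    # 23 of 26 early career awardees currently practicing hematologists.
    early_career = binomtest(23, 26, p=national_rate, alternative="greater")
    print(f"early career awardees vs 5.7%: p = {early_career.pvalue:.2e}")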
Moreover, MRI alumni continued their engagement in hematology research after their award, as demonstrated by presenting research at the ASH annual meeting and volunteering with ASH. Strengths of the MRI program include the longstanding financial commitment from ASH and ASH members. Additionally, ASH’s responsiveness to feedback from program alumni and mentors triggered the creation of additional awards to create a longitudinal pathway of support. Outcomes related to medical specialties were obtained online from publicly available data. As a result, retention in hematology was limited to medical researchers who were either board eligible or board certified in hematology. This approach resulted in a conservative estimate of retention in hematology because there are researchers who are not board eligible or certified in hematology but are still actively engaged in hematology. Additionally, because doctoral researchers do not undergo certification, we were unable to assess the retention in hematology for these researchers. Another limitation of the evaluation was selecting the appropriate comparison group because there were not published estimates of underrepresented practicing hematologists outside of faculty at academic institutions. Future directions include implementing a standard annual assessment tool for all MRI awardees, which will allow for the long-term tracking of the outcomes of interest. Our next steps also include a formal mixed methods assessment to elucidate the impact of the award on one’s career. A subset of awardees from each program will be interviewed using the social cognitive career theory as a conceptual framework to understand how experiences in the program influenced their academic/career choices and affected their self-efficacy. To support the states who have not yet had an MRI awardee, next steps include the identification of local active ASH members who could act as ambassadors for the MRI program. As mentioned previously, in 2018, the ASH Ambassador Program was created to help improve the geographical diversity of MRI applicants, but we do not have an ambassador in every state yet. In summary, the ASH MRI program has provided research opportunities for members of historically disadvantaged populations while having the foresight to address the barriers that have traditionally prevented these individuals from leveraging research opportunities and awards (ie, provided stipends, annual travel bursaries to attend meetings, and mentoring, etc). ASH is the only society that offers a longitudinal pathway of support for both medical and doctoral researchers who identify as underrepresented minorities. This initiative serves as just 1 facet in the ASH commitment to advancing health equity for individuals affected by blood disorders. ASH continues to stand for diversity, equity, and inclusion in a climate in which diversity, equity, and inclusion are sometimes under threat. As long as racial disparities in health care exist, committed efforts from ASH and other societies are necessary to improve the outcomes of patients. ASH supports underrepresented individuals in hematology and strives to evaluate and dismantle the causes of inequitable health outcomes for those affected by blood disorders. This is achieved through programs such as the MRI, as well as clinically focused work that includes reconsidering the use of race as a proxy for biologic and genetic differences in all areas of hematology-oncology.
Intracellular Transport of Silver and Gold Nanoparticles and Biological Responses: An Update
8b8c0e76-1550-4891-af10-38b6f4e84027
5983807
Preventive Medicine[mh]
Metal nanomaterials are widely considered promising multifunctional platforms for many purposes due to their peculiar photonic, electronic, catalytic, and therapeutic properties, versatile methods of synthesis ensuring a wide range of sizes and shapes, and surface functionalization. In the biomedical field, the most exploited metal-based nanoparticles (NPs) are silver (AgNPs) and gold (AuNPs) NPs, the first due to their biocide activity and the second for their photoactivation capability, inert character, biocompatibility, and easy, high-yield production . They are efficiently used as therapeutic agents for cancer treatment and as medical tools for bioimaging and biosensing . In the food sector, AuNPs are used as dietary supplements, whereas AgNPs are used in food packaging because of their antimicrobial properties. In particular, the antimicrobial activity of AgNPs is the reason for their increasing use in environmental treatments (e.g., air disinfection, water disinfection, groundwater and biological wastewater disinfection) and surface disinfection (e.g., silver-nanoparticle-embedded antimicrobial paints, antimicrobial surface functionalization of plastic catheters, antimicrobial gel formulations for topical use, antimicrobial packing paper for food preservation, silver-impregnated fabrics for clinical clothing) . Consequently, the many consumer products in which AgNPs are present (estimated at 14% of products) increase the risk of human exposure—whose benefits and risks we reported in Panzarini 2017 —and the release of AgNPs into the environment, which in turn amplifies the possible interactions with animals, plants, and humans . During the production and use of nanomaterials (NMs), the possibility of exposure for workers, consumers, and the environment is very high, but the resulting effects cannot be precisely predicted because of the particulate and molecular identity of the nanoscaled materials . Furthermore, it is difficult to identify companies producing or processing NMs, because many companies are not classified as nanotechnology companies . A growing number of occupational activities directly or indirectly expose workers to NPs, and it has been estimated that about 6 million workers would be potentially exposed to NPs by 2020, but there are still few data about the risks. The occupational activities with a substantial probability of worker exposure to NPs identified by the European Agency for Safety and Health include construction, health care, energy, the automobile and aerospace industry, the chemical industry, and electronics and communication. It is very important to identify the hazards of NMs and to define risks and prevention strategies for exposed workers. In general, humans become susceptible to NMs because of a limited capability to tolerate and respond to these exogenous toxicants, which depends on inherited and genetic susceptibility, epigenetically induced modifications and age, pathological conditions, and lifestyle-induced alterations, as recently reviewed by Iavicoli et al. . Furthermore, these factors are amplified by the great variability of NMs physicochemical properties that, in turn, dictate the human response to exposure. However, a systematic toxicity database on this topic does not yet exist.
The human body can come into contact with NMs of synthetic origin mainly through three routes: inhalation through the respiratory system, ingestion through the gastrointestinal tract, and absorption via the cutaneous route . When NPs are able to overcome these barriers, further barriers protect the internal organs of the human body. These internal or secondary barriers are the blood-brain barrier, which protects the brain, the blood-testicular barrier, which protects the male reproductive system, and the placenta, which protects the developing embryo. Nanostructures, once inhaled, ingested, or administered topically, can reach the bloodstream and be transported to and accumulate in various organs. In vivo animal studies have shown that NPs can be found in the blood circulation and in the central nervous system (CNS), inducing inflammatory reactions at the pulmonary level and problems at the cardiovascular level, and can accumulate in various organs such as the liver, spleen, lymph nodes, and bone marrow . Airways are one of the main routes by which the human body comes into contact, voluntarily or accidentally, with nanostructures. The deposition efficiency of inhaled NPs depends mainly on their diameter and aerodynamic characteristics: in fact, size and shape are important for determining which compartment of the respiratory system will be mainly exposed, whether the upper airways, the lower airways, or the alveoli. Particles, in general, are deposited efficiently throughout the respiratory tract, from the nasal cavity to the alveoli, through diffusional mechanisms . Small NPs can proceed more deeply into the respiratory system, settle, and be absorbed by the pulmonary epithelium, entering the circulation, while those with a larger diameter are stopped in the upper respiratory cavity and expelled through mucociliary clearance mechanisms . Mucociliary transport is essential for clearance of the respiratory tract, while at the alveolar level, NPs translocate via transcytosis through the epithelium of the respiratory tract. Here, they reach the pulmonary interstitium, where they can subsequently be phagocytized by alveolar macrophages or enter the bloodstream directly or via the lymphatic pathway . Other studies suggest that inhaled NPs, after being deposited in the lungs, evade the control of alveolar macrophages and manage to infiltrate the interstitial space by translocating from the alveolar spaces through the epithelium . Furthermore, the translocation of inhaled NPs to extrapulmonary sites, such as the circulatory system, the heart, the liver, and the brain , is possible, even if the mechanisms by which translocation occurs are not completely clarified. The gastrointestinal route is potentially important for consumers, but it is considered less relevant for workers, at least in comparison with the pulmonary route. However, it is important to underline that a percentage of inhaled nanoparticles are cleared by mucociliary cells into the oral cavity and ingested into the gastrointestinal tract . Skin contact with nanomaterials can also lead to adverse consequences. Estimates of possible dermal exposure to manufactured NMs in the workplace have been reported. Certain metals, such as nickel, are also known to cause dermatitis. However, the three layers of skin (epidermis, dermis, and subcutaneous tissue) make it difficult for ionic molecules to penetrate.
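To make the size dependence of diffusional transport more concrete, the short sketch below (not taken from the review) evaluates the standard Stokes-Einstein relation, D = kT/(6πηr), in water at body temperature as a simple illustration; diffusion of airborne particles in the respiratory tract additionally requires a slip correction, so this is only an order-of-magnitude picture of how strongly Brownian diffusivity grows as particle diameter shrinks.

    import math

    def stokes_einstein_diffusivity(diameter_m, temperature_k=310.0, viscosity_pa_s=6.9e-4):
        # Brownian diffusion coefficient (m^2/s) of a sphere from the Stokes-Einstein relation.
        # The default viscosity approximates water at 37 degrees C (an assumption for illustration).
        k_b = 1.380649e-23  # Boltzmann constant, J/K
        radius = diameter_m / 2.0
        return k_b * temperature_k / (6.0 * math.pi * viscosity_pa_s * radius)

    for d_nm in (10, 50, 100):
        diffusivity = stokes_einstein_diffusivity(d_nm * 1e-9)
        print(f"{d_nm:>3} nm particle: D = {diffusivity:.2e} m^2/s")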
Furthermore, no evidence has been reported of penetration through intact or damaged skin into the systemic circulation . Biomarkers, "chemicals, their metabolites, or the products of an interaction between a chemical and some molecules or cells that are measured in the human body" (Committee on Human Biomonitoring for Environmental Toxicants, National Research Council, 2006), are of great importance in occupational medicine, since they give information about exposure over time and through different routes of exposure. Furthermore, they provide insight into the toxicokinetics of several substances among workers. When discussing exposure assessment, there are three types of biomarkers that may be useful: exposure biomarkers, effect biomarkers, and susceptibility biomarkers. Exposure biomarkers provide information on the route and on the source of exposure, such as assessment of a worker's current exposure to solvents and some metals. Biomarkers of effect give an assessment of the consequences elicited by chemicals on physiological processes. They are indicators of an early health effect (possible health impairment; critical effect) or a clinical effect (disease). Susceptibility biomarkers indicate the relationship between the natural characteristics of an organism and the effects of exposure to a chemical. They can help to define the most critical moments when exposures can be more dangerous . In general, biomarkers are usually measured in biological fluids, such as urine, saliva, and blood. For example, different studies have demonstrated the potential relevance of pulmonary cytokines as possible biomarkers of effect for the evaluation of lung exposure to NMs . However, this analysis is carried out by an invasive method, i.e., sampling of broncho-alveolar lavage fluid (BALF), and cannot be used as routine screening in humans. Recently, it has been shown that the detection of cytokines may be performed by a non-invasive procedure that analyzes the exhaled breath condensate . Other literature data on experimental animals report a comparison between the amount of metal NPs, such as silver and gold, and the concentration of the elemental metal found in the blood . However, measuring the concentration of a metal in elemental form is not a correct way to evaluate a specific marker of exposure, because it complicates the interpretation of the data. Furthermore, the screening of relevant biomarkers of exposure is a more difficult task for NMs than for other substances, since there is not much information about their absorption, biodistribution, and excretion. The interactions of NPs with biological systems, including their entry into cells, play a key role in executing their functions and eventual toxicity. In fact, it is known that the small size of NPs can allow easy penetration into cells and translocation among different cells, tissues, and organs that are remote from the portal of entry to the body, ultimately representing a great risk to human health. Many routes are used by NPs to enter the human body, such as inhalation, ingestion, skin penetration, and/or injection. At the cellular level, NPs can enter cells through intracellular (i.e., phagocytosis, macropinocytosis, clathrin-mediated, caveolin-mediated, and non-clathrin and non-caveolin-mediated endocytosis), paracellular, and transcellular pathways .
The design of new biological functions or the prediction of the toxicological consequences of metal NPs in vivo first requires knowledge of their interplay, including accidental interactions, with target cells and tissues, which innately have barriers to prevent the entry of foreign particles. Physicochemical and mechanical characteristics of NPs, such as stability, size, surface charge, shape, hydrophobicity, surface chemistry, and conjugated proteins and ligands, influence cellular internalization and trans-barrier transport. NP–cell membrane interactions may also influence their intracellular trafficking, such as sorting into different intracellular compartments, cellular retention, and biological fate, whether the final outcomes are adverse or favorable. Thus, understanding the effects of various NP characteristics on cellular and biological processes, and manipulating those characteristics, could help in designing efficient and safe NPs, avoiding or facilitating internalization to better exploit the potential of nanoconstructs. The presence of metal NPs in biological systems is a major concern, and it remains difficult to give general warnings, since literature data differ depending on the NP type, cell lines, experimental designs, and endpoints of observation considered . In addition, the lack of suitable user-friendly methodologies to investigate the extent and mode of NP-cell interactions amplifies the scarcity of detailed investigations. Literature data agree that several hazardous effects occur at the cellular level, such as generation of reactive oxygen species, lipid peroxidation, genotoxicity and mutagenesis, apoptotic or necrotic cell death, mitochondrial dysfunction, and changes in cell morphology . Here, the existing knowledge and uncertainties regarding the biological consequences of the widely used Au and Ag NPs will be highlighted in relation to cellular uptake, localization, and translocation of NPs. Moreover, a section will be dedicated to the methods available for qualitative and quantitative analysis of cell-associated NPs, which allow the distinction between cell-surface-bound and internalized NPs and the tracking of NP intracellular fate and speciation. A significant step toward reducing possible dangers to human health or improving the efficacy of NPs relies on thorough knowledge of their biological interactions (with cells, tissues, or organisms) and subsequent internalization . Since cell membranes allow free diffusion only of small molecules (oxygen, carbon dioxide, water, and small hydrophobic or nonpolar molecules) or particles sized 10–30 nm, various distinct pathways for cellular internalization of particulate matter (lipids, proteins, glucose, and other extracellular substances), pivotal for exerting an effect at the cellular level, exist. These pathways are categorized as endocytosis, a mechanism that internalizes cargo in transport vesicles derived from the plasma membrane . The endocytosis mechanisms include phagocytosis (uptake of particles by specialized cell types, i.e., macrophages, monocytes, and neutrophils) and pinocytosis (uptake of extracellular fluids and soluble substances). Pinocytosis can be further divided into four mechanisms depending on the size of the vesicles and the proteins involved in their formation.
They include (1) macropinocytosis (an actin-dependent pathway initiated by ruffling of the plasma membrane followed by the formation of large vacuoles named macropinosomes); (2) clathrin-mediated endocytosis, also known as receptor-mediated endocytosis or RME (internalization of biomolecules via clathrin-coated vesicles containing plasma membrane-specific receptors); (3) caveolae-mediated endocytosis (internalization of extracellular ligands and biomolecules through flask-shaped invaginations, named caveolae, consisting of the cholesterol-binding protein caveolin); (4) non-clathrin- and non-caveolin-mediated endocytosis . In addition to the intracellular endocytic delivery system, vesicles, rather than undergoing degradation in the cytoplasm, can be transported to the other end of the cell surface and released into the extracellular environment. This route is known as the transcellular delivery pathway. Finally, a passive process for the transport of molecules, named the paracellular delivery pathway, is also known. The molecules transit between adjacent cells via tight junctions, whose pores, with diameters of up to 15 angstroms and a negative charge, regulate the delivery . Each type of NP exhibits a preferred pathway for cellular internalization, and several investigations suggest that size, surface charge, shape, functionalization, and protein corona dictate entry and subsequent cytosolic access of NPs into living cells, as reported in . NP size is the main feature affecting uptake, both the pathway type and the amount, which is strictly dependent on the size of the internalizing cells, cell membrane tension, and cell spreading. The surface charge of NPs is also crucial for their uptake: (a) cationic and neutral NPs are efficiently transported into the cells; (b) cationic NPs have a higher uptake than neutral ones; (c) neutral NPs are endocytosed via the caveolae-mediated pathway; (d) cationic NPs are transported using the paracellular pathway; (e) cationic NPs commonly use the clathrin-mediated pathway . NP shape strongly influences internalization and plays a key role in NP design . Due to their curvature, spherical NPs have a higher internalization probability than asymmetrically shaped ones. In addition, NP shape dictates the accumulation in different organs or tissues. For example, in the lung, discoidal-shaped NPs are internalized more than spherical or cylindrical ones; conversely, liver cells internalize cylindrical NPs better . Finally, nano-worms are internalized by fibrosarcoma and breast cancer cells to a greater extent than spherical NPs . Other important characteristics of NPs that can be modulated to positively or negatively affect uptake are their surface properties. For example, it is worth noting that NPs used in medicine as drug delivery systems require a long circulation time in the body to recognize the specific sites of interest. Thus, the interaction of NPs with plasma proteins, which can cause opsonization of the NPs, leading to recruitment and clearance through immune cell stimulation, or the formation of a protein corona, is very important. Modulation of the hydrophobicity and hydrophilicity of NPs by adding molecules such as polyethylene glycol (PEG) or zwitterionic agents makes it possible to overcome this concern . A strict correlation exists between the uptake of metal NPs and cellular responses. In general, each type of NP induces specific cell responses that depend on the same parameters affecting the uptake, i.e., size, shape, surface functionalization, and coating.
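As a purely illustrative aside (not from the review, and not a predictive model), the qualitative charge-related trends (a)-(e) listed above can be condensed into a small lookup of the uptake routes most often reported for each surface charge.

    # Toy mnemonic only: it restates the qualitative trends (a)-(e) described above.
    # Real uptake also depends on size, shape, coating, protein corona, and cell type,
    # so this is not a predictive classifier.
    CHARGE_TO_UPTAKE = {
        "cationic": {
            "relative uptake": "higher than neutral NPs",
            "commonly reported routes": ["clathrin-mediated endocytosis", "paracellular transport"],
        },
        "neutral": {
            "relative uptake": "efficient, but lower than cationic NPs",
            "commonly reported routes": ["caveolae-mediated endocytosis"],
        },
    }

    def summarize(charge: str) -> str:
        # Return a one-line summary of the reported tendencies for a given surface charge.
        entry = CHARGE_TO_UPTAKE.get(charge.lower())
        if entry is None:
            return f"{charge}: no general trend stated in the text above"
        routes = ", ".join(entry["commonly reported routes"])
        return f"{charge}: {entry['relative uptake']}; typical routes: {routes}"

    print(summarize("cationic"))
    print(summarize("neutral"))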
In addition, NP concentration, a parameter strictly dependent on the rate of uptake, plays a pivotal role in toxicity, which increases when high NP concentrations are used. Metal NPs preferentially enter cells via endocytosis. This route transports NPs from the extracellular environment to endosomal and then lysosomal sites, thus dramatically changing the environmental conditions of the NPs. In fact, metal NPs pass from the neutral pH (about 7.4) of the extracellular medium to the nearly pH 6.0 of endosomes and then to the acidic pH (4.5) of lysosomes, which, in turn, triggers the release of relatively toxic ions in the cell . Also, once inside the cytoplasm, metal NPs can be degraded by cytoplasmic enzymes, such as cathepsin L, which cause the release of free metal ions and changes in size and shape. This biodegradation affects cell homeostasis and functions, leading to damage of mitochondria, lysosomes, and the endoplasmic reticulum, which, in turn, activates other mainstream adverse events leading to alteration of proteins, genotoxicity, reactive oxygen species (ROS) production, DNA damage, apoptotic and necrotic cell death, etc. . For this reason, the mechanism of internalization is considered a "Trojan horse effect" and the release of toxic free ions by lysosomes a "lysosome-enhanced Trojan horse effect" . The toxicity can be overcome by incubation with specific ion chelators, which do not affect the uptake efficiency and do not induce cross-toxicity , or by treatment with lysosomotropic agents that neutralize lysosomal acidity and, consequently, decrease toxic ion release . As for all metal NPs, size, shape, and surface properties influence their cellular internalization . Chan, by using a set of AuNPs with sizes ranging from 10 to 100 nm, observed that 50 nm sized NPs show the highest uptake by human epithelioid cervix carcinoma (HeLa) cells, demonstrating the dependence of uptake on NP size . Similar results were obtained by Ko and coworkers, who observed that 50 nm AuNPs are internalized by human adipose-derived stem cells more than 15, 30, 75, and 100 nm NPs . Moreover, AuNPs may aggregate, and the uptake efficiency of aggregates is controversial and dependent on cell type: in fact, the uptake rate is reduced in HeLa and A549 cells while it is increased in melanoma MDA-MB 435 cells with respect to single, monodispersed NPs. The researchers explain this phenomenon by the different endocytosis pathways elicited by cells to engulf AuNPs: in particular, HeLa and A549 cells take up AuNP aggregates via receptor-mediated endocytosis, whereas MDA-MB435 cells use other receptor-independent mechanisms . A few studies have shown enhanced cellular uptake of smaller 1–2 nm gold nanoclusters (NCs) by dendritic cells (DCs) compared to larger 12 nm gold NPs . Conversely, Fytianos demonstrated that DCs more efficiently internalize 50 nm gold NPs than 10 nm ones because of the minimum wrapping time required for 50 nm gold NPs compared to their smaller counterparts . AuNPs coated with polyethylene glycol (PEG), polyvinyl alcohol (PVA), or a mixture of both ((PEG+PVA)-AuNPs) to have either a positive or negative surface charge display different behavior: monocyte-derived dendritic cells (MDDCs) internalize AuNPs, but the surface modification influences the uptake amount. Limited uptake was observed for PEG-AuNPs; in contrast, (PEG+PVA)-AuNPs and PVA-AuNPs were largely internalized .
Moreover, by using several pharmacological inhibitors, Saha demonstrated that the uptake of cationic AuNPs in both cancer (HeLa) and normal (MCF10A) cells strongly depends on the AuNP surface monolayer and involves different endocytic pathways as well as specific cell surface receptors (e.g., scavenger receptors) . Most of the attention of researchers has been given to spherical AuNPs, but in recent decades AuNPs of different shapes have been synthesized to improve nanocarriers by modulating the chemical and physical properties of AuNPs depending on shape . The newly synthesized AuNPs include triangles , stars , cubes , octahedrons , plates and prisms . In general, spherical AuNPs are taken up more efficiently than other shapes. In fact, Cho studied the effects of size, shape, and surface chemistry of spherical and cubic Au nanostructures (nanospheres and nanocages, respectively), whose surface was modified with poly(ethylene glycol) (PEG), anti-HER2 antibody, or poly(allylamine hydrochloride) (PAA), on their uptake (including both adsorption and internalization) by SK-BR-3 breast cancer cells. The results show that both the size and the surface chemistry of the Au nanostructures influence their uptake by the cells: smaller AuNPs are better internalized than larger ones; PAA-functionalized AuNPs are better internalized than anti-HER2- and PEG-functionalized ones; and cells internalize spherical particles over cubic particles when the surface is modified with PEG or anti-HER2 . However, Nambara suggested that triangular AuNPs are more effectively taken up by RAW 264.7 macrophages and HeLa cells than spherical ones, even with the same surface area and functionalization . In June 2017, Xie and coworkers synthesized three different shapes (stars, rods and triangles) of AuNPs to investigate the effects of shape on cellular uptake by RAW 264.7 macrophages. The Au nanoconstructs had the same size and the same coating (methyl polyethylene glycol, mPEG) to exclude other factors. In fact, mPEG provides a neutral surface charge, good dispersion of AuNPs in aqueous solution and prevention of adhesion of serum proteins. The highest cellular uptake by RAW 264.7 was observed with the triangular shape after 24 h, followed by rods and stars. NPs are internalized as single particles and are localized in endosomes and/or lysosomes in the perinuclear region of the cells. However, different endocytosis pathways are engaged in relation to shape: stars enter cells through a clathrin-mediated process; rods are internalized through both clathrin- and caveolae-mediated endocytosis; triangles cause a strong cytoskeletal rearrangement leading to the highest uptake and enter cells via clathrin-mediated endocytosis and a dynamin-dependent pathway . Despite the numerous studies on the influence of AuNP parameters on cellular uptake, the role of cell size in uptake remains unclear. Wang investigated the influence of cell size on the cellular uptake of 50 nm sized PEG-AuNPs by using micropatterned PVA coatings to modify the size of human mesenchymal stem cells (hMSCs), obtaining cells of 20, 40, 60 and 80 μm. Wang demonstrated that large-sized cells have a high total cellular uptake but a low average uptake per unit cell area, while small-sized hMSCs show the opposite behavior. 
In fact, the high total cellular uptake is due to the large contact area with the NPs; but the large size of the cells causes a high membrane tension that requires a high wrapping energy for engulfing NPs and thus reduces the average uptake per unit cell area . Gold nanoparticles have been found to be very biocompatible and non-toxic according to many reports. Connor demonstrated that AuNPs of different sizes (4, 12, and 18 nm in diameter) and capping agents (citrate, glucose, biotin, etc.) enter K562 human leukemia cells, do not induce any toxicity and reduce reactive oxygen species levels . Similar results have been achieved for other cell lines, such as Raw264.7 mouse macrophages and dendritic cells . In addition, AuNPs can engage beneficial cell responses that can be exploited in tissue engineering. For example, AuNPs have been shown to modulate the differentiation of stem cells, leading to induction of differentiation and bone mineralization, and to be useful in immunotherapies and vaccine development by targeting DCs . Even if AuNPs are considered highly biocompatible nanoconstructs, a potential toxicity mainly related to the internalization modality has been demonstrated. A paper by Sabella et al. demonstrated the release of free gold ions in monocytoid U937 cells, HeLa cells, human breast adenocarcinoma epithelial MCF7 cells, human colon adenocarcinoma epithelial Caco-2 cells, human neuroblastoma SH-SY5Y cells and human hepatoma Huh-7 cells upon interaction with two different types of AuNPs. The two types of AuNPs have identical physico-chemical properties but differ in the ligand shell composition, with a stripe-like and a random distribution of the ligands, named by the authors striped AuNPs and unstructured AuNPs, respectively. Unstructured AuNPs enter cells via an endocytic pathway and co-localize with lysosomes, while striped AuNPs are taken up via a non-endocytic pathway and mainly distribute in the cytosol. The authors demonstrated that the unstructured AuNPs are more toxic than the striped ones, suggesting that the induced NP toxicity is internalization pathway-dependent. It is likely that unstructured AuNPs entrapped in the lysosomes undergo enhanced corrosion and ion leakage, with consequent toxicity to cells . Goodman et al. demonstrated that gold nanoparticles are toxic to Cos-1 mammalian cells depending on surface charge: cationic NPs are toxic, whereas anionic ones are not . Further, AuNPs are toxic when administered to endothelial cells, SK-Mel-28 and L929 cells, and HeLa cells . AgNPs easily pass biological barriers and can translocate from the route of exposure to other vital organs. As already reported for AuNPs, the interaction between AgNPs and cells, the uptake modality and the biocompatibility are related to many factors, both NP-related, such as size, shape, surface charge, surface coating, solubility, concentration, and surface functionalization, and related to the experimental conditions or cells, e.g., distribution of particles, mode of entry, mode of action, growth media, exposure time, and cell type. Primary brain astrocytes, normal human lung fibroblasts (IMR-90), and human glioblastoma cells (U251) internalize AgNPs through lysosomal or endosomal endocytosis . Conversely, macrophages, fibroblasts, and glioblastoma cells take up AgNPs via macropinocytosis, scavenger receptor- and clathrin-mediated mechanisms . 
Recently, Hsiao used three different brain cell types (murine brain astrocyte-like ALT cells, murine microglial BV-2 cells and murine neuroblastoma N2a cells) to study the uptake and toxicity of 10 nm sized AgNPs. The uptake profiles are dose- and cell-dependent: ALT cells took up the highest amount of AgNPs, followed by BV-2 and N2a cells, and cell viability correlates with the uptake levels. Moreover, lipopolysaccharide (LPS)-activated BV-2 cells took up larger amounts of AgNPs than their normal counterpart, whereas no difference in NP uptake between normal and LPS-activated ALT and N2a cells was detected. Caveolae-independent and clathrin-independent endocytosis and phagocytosis are the preferred internalization pathways for ALT cells, while macropinocytosis and clathrin-dependent endocytosis are involved in the uptake by BV-2 cells . Depending on size and surface properties, once internalized, AgNPs translocate to the mitochondria and nucleus and elicit alteration of cell morphology, oxidative stress, DNA damage, inflammation, genotoxicity, mitochondrial dysfunction, and consequent induction of apoptosis or necrosis . Smaller AgNPs exhibit an improved ability to pass the plasma membrane and localize inside the cell, eliciting a higher toxicity, as demonstrated in spermatogonial stem cells . In fact, uptake and cytotoxicity are amplified with smaller-sized AgNPs due to the increased surface area and particle number for the same mass/volume dose, which in turn correlate with a higher rate of Ag+ ion release into the cell culture medium . Conversely, in a recent study using AgNPs of 15, 50, and 100 nm, Chen et al. demonstrated that 50 nm AgNPs exhibit the highest adsorption and passive uptake in red blood cells (RBCs), while the smallest 15 nm AgNPs are the most cytotoxic; the 100 nm-sized AgNPs aggregate and are not able to pass the plasma membrane . The efficient passage of smaller-sized AgNPs through biological barriers causes NP accumulation that, in turn, elicits cytotoxicity in lung, stomach, breast and endothelial cells . The shape of AgNPs plays a key role in the uptake and the consequent cellular effect. Spherical AgNPs (30 nm) are efficiently endocytosed by A549 cells, while very few silver wires (length: 1.5–25 μm; diameter 100–160 nm) are observed inside the cells. When cytotoxicity is considered, silver nanowires induce very high cytotoxicity compared to the minimal effects associated with silver nanospheres. This may be due to the interaction between silver nanowires and the plasma membrane rather than to the endocytosis mechanism . To improve the stability and dispersibility of AgNPs, reduce their agglomeration, and impart novel functions, a panel of molecules can be used for coating the particles, which in turn can also affect the cellular uptake. For example, uncoated AgNPs are taken up by human lung cells to a larger extent than citrate/polyvinylpyrrolidone (PVP)-coated ones because albumin and other human serum proteins suppress their cellular uptake. Conversely, the serum proteins enhance internalization of silica-coated NPs . In general, this difference may be due to the higher affinity of the negatively charged cell membranes for positively charged AgNPs, thus promoting internalization and intracellular bioavailability of these particles . Zhang et al. have recently reviewed cellular responses to AgNPs in in vitro models. Several studies have dealt with the potential cytotoxicity and genotoxicity induced by AgNPs on both tumor cell lines and normal cell lines . 
Cytotoxicity associated with AgNPs is related to oxidative stress and to the release of Ag+ ions deriving from the dissolution of AgNPs . Once Ag+ ions have been released, they interact with the thiol groups of antioxidants, i.e., superoxide dismutase (SOD), thioredoxin and glutathione (GSH), causing oxidative stress and DNA damage up to apoptotic cell death . Data in the literature report a correlation between mitochondrial damage and ROS production in cells . AgNPs have a strong effect on mitochondria, leading to a drop in mitochondrial membrane potential, breaking of the respiratory chain, oxidative stress and inhibition of ATP synthesis, which give rise to the activation of the apoptosis pathway . Also, small AgNPs with a diameter of less than 10 nm are able to cross the nuclear pores, reaching the nucleus and leading to ROS production, DNA damage, cell cycle arrest, and chromosomal aberration in human fibroblasts and glioblastoma cells . Furthermore, it has been observed that AgNPs induce genotoxic effects in HepG2 cells, human mesenchymal stem cells (hMSCs) and human peripheral blood mononuclear cells (PBMC). In particular, Kawata et al. suggested that AgNPs cause dangerous effects on the DNA (e.g., chromosome aberration) by demonstrating the up-regulation of DNA repair genes and the increase in micronuclei formation in cells treated with low doses of AgNPs (<1.0 mg/L) . AgNPs induce different effects in neurons leading to toxicity: alteration of cell morphology, degradation of cytoskeleton components, perturbations of pre- and postsynaptic proteins, and mitochondrial dysfunction . Other studies indicate that AgNPs reduce cell viability in different cell lines by causing apoptosis through the mitochondrial pathway ; stimulate inflammatory and immunological responses in cells, inducing cytotoxicity, elevated secretion of proinflammatory cytokines (such as interleukin-1β, interleukin-2, tumor necrosis factor α, and prostaglandin E2) and increased blood–brain barrier permeability and immunotoxicity in a size-dependent manner in rat brain microvessel endothelial cells . AgNPs also have an important role in angiogenesis. Gurunathan et al. showed that they act as an anti-angiogenesis factor in cells by inducing the activation of the phosphatidylinositol-3-kinase/Protein Kinase B (PI3K/Akt) pathway and inhibiting cell proliferation and migration mediated by vascular endothelial growth factor (VEGF) . In another study, Sriram et al. demonstrated that AgNPs have a dose-dependent anti-cancer activity in the Dalton's lymphoma ascites cell line, acting on caspase-3 activation and DNA fragmentation . In addition, AgNPs inhibit HIF-1 expression and its downstream targets, and this provides new evidence about the effects elicited by AgNPs on cytotoxicity mechanisms and angiogenesis. Some open questions remain about the effects of AgNPs on development. Although several studies demonstrated that various metal NPs have no significant effects on the morphology, viability and differentiation capability of stem cells, only a few works reported the effect of AgNPs on human and non-human stem cells. In particular, it has been demonstrated that both AgNPs and Ag+ negatively impact development by changing transcriptomic responses in embryonic stem cells (ESCs), an effect due, in the case of AgNPs, to their nanosized form . As already stated, oxidative stress induction and free silver ion release mediate AgNP cytotoxicity , even if it is not clear to what degree the toxicity depends on free Ag+ or on the AgNPs themselves. 
To limit ion release, AgNPs can be synthesized with a specifically designed surface capping and/or functionalization that, by interfering with the dissolution process, limits or inhibits the release of Ag+ . Recently, among surface coatings such as starch, PVP (poly(N-vinyl-2-pyrrolidone)), citrate, polymers, etc., there is an increasing interest in using carbohydrates as biomimetic molecules because of their double function: (i) glycans allow synthesis of NPs without traces of toxic chemicals; (ii) the glycan capping of NPs serves as a targeting moiety and mediates cellular responses . The advantages of using glucose, fructose or sucrose to form a capping around 30 nm AgNPs that affects their internalization have been reported by us in different cell types (lymphocytes, HeLa and HepG2 cells) . In particular, glucose has been demonstrated to be a key factor in inducing uptake of AgNPs . Endosomes and lysosomes are the main organelle targets of AgNPs . AgNPs, by interacting with the acidic lysosomal compartment, induce the production of ROS, including superoxide anions (O2−), hydroxyl radicals (•OH), and hydrogen peroxide (H2O2). Thus, ROS diffusion into the cytoplasm results in oxidative damage to proteins and other organelles, such as mitochondria. In particular, H2O2 dissolves AgNPs and causes accumulation of Ag+ in lysosomes. AgNPs and Ag+ can escape from lysosomes, amplifying the increase of ROS in the cytoplasm, which, in turn, allows further dissolution of AgNPs with Ag+ production. ROS can also mediate the release of Ca2+ from the endoplasmic reticulum (ER), leading to an imbalance of calcium homeostasis . In this manner four death pathways are elicited. The first one is the necrotic pathway via rupture of the plasma membrane; the second is the induction of mitochondrion-dependent apoptosis, via alteration of electron transfer; the rupture of the lysosomal membrane is the cause of the third death pathway, the lysosome-mediated apoptosis; the last one is the ER-mediated apoptosis. Moreover, AgNPs present in the cytoplasm can diffuse into the nucleus through nuclear pores and directly damage DNA and chromosomes, while free Ag+ ions released by the AgNPs that have entered the nucleus can contribute to DNA damage . The study of nanoparticle–cell interactions is a key question in the fields of nanomedicine as well as nanotoxicology. In fact, the amount of nanoparticles internalized by cells or bound to the external surfaces of cells determines the NP toxicity profile, while the cellular binding and uptake of medically effective NPs determine their efficiency and efficacy. Despite their importance, these processes are underinvestigated, mainly due to the lack of suitable user-friendly methodologies. Researchers agree that an ideal methodology would require minimal sample preparation, allow sufficient resolution to assess NP cellular and subcellular localization at the single-cell level, allow high sample throughput, and finally be independent of material properties, not requiring fluorescent or radioactive labeling of NPs. Several methods, reviewed in Ivask et al. , can be employed to study the cellular uptake, distribution and speciation of metal NPs in cells. Here, we highlight only the most useful techniques for a direct observation of metal NPs and for the quantitative evaluation of biodistribution, focusing on our experience in the detection of AgNPs and AuNPs. 
The most common method to visualize silver and gold NPs at high magnification is electron microscopy, namely transmission (TEM) and scanning (SEM) electron microscopy. Even if TEM analysis is a time-consuming technique, its advantage is that it does not require specifically tagged metal NPs, as is needed, for example, for fluorescence microscopy, since metallic materials possess a high electron density and can easily be visualized under the electron beam. Moreover, to increase the imaging contrast, dark-field observation can be used . Dark-field microscopy has been widely used to visualize interactions between mammalian cells and AgNPs in in vitro as well as in vivo experiments . Anderson et al. used enhanced dark-field microscopy to visualize NPs in tissues of animals that were exposed to 20 nm AgNPs, and Roth et al. used the same methodology to identify NPs in animals pre-exposed to nanoparticulate metal oxide. Electron microscopy can also be used for the quantification of metal NPs, even if this is a very time-consuming approach. In fact, mass spectrometry-based (MS) methods, such as inductively coupled plasma mass spectrometry (ICP-MS), and related elemental techniques such as atomic emission and optical emission spectrometry (AES and OES), are becoming increasingly popular for quantifying cell-associated metal NPs . Although these methods are very sensitive, with detection limits in the order of parts per billion, they are not able to detect any changes in speciation that may take place during biological exposure and enable only elemental analysis. Time-resolved or single-particle ICP-MS (SP-ICP-MS) overcomes this limitation of ICP-MS. Today, SP-ICP-MS is of growing popularity for the sensitive (detection limit in the ng/L range) characterization of metal NPs in the field of environmental chemistry as well as in nanotoxicology. SP-ICP-MS has been used to analyze AgNPs and AuNPs in organisms , tissues and to study the dynamics of AgNP transformations in human plasma . Finally, Raman microspectroscopy imaging represents a powerful bioanalytical method to provide information regarding the chemical composition of a single cell without prior staining. This technique has been used to visualize and map both AgNPs and AuNPs inside the cells. Distinguishing between cell surface-bound and internalized NPs is pivotal in studying the cell–NP relationship. The most common techniques employed for this purpose are confocal fluorescence microscopy, TEM and SEM. In particular, TEM allows the reconstruction of 3D images and the determination of the cellular localization of NPs from sequential ultrathin specimen sections . Field emission (FE) SEM can also be used to distinguish between intracellular and externally bound NPs with a resolution of 1 nm , by using different accelerating voltages . In addition, electron microscopy techniques can be coupled with energy dispersive X-ray (EDX) analysis and backscattered electron imaging (FE-SEM) to offer an ideal analytical platform for the characterization of NPs in and on the cells . Using FE-SEM, NP images can be acquired inside the cells without disruption of the cellular shape, and the initial steps of NP incorporation into the cells can also be captured. In addition, the matrix used to treat the glass renders the sample highly stable under the required accelerating voltage. By coupling EDX analysis, an analytical technique used for elemental analysis or chemical characterization based on the interaction of X-rays with the sample, it is possible to confirm the presence of nanoparticles inside the cells. 
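For orientation, the sizing arithmetic that underlies SP-ICP-MS characterization can be illustrated with a short Python sketch. It assumes that a per-particle mass has already been derived from the instrument calibration (a step not shown here) and that the particles are solid spheres of bulk-density metal; the 1 fg example mass is hypothetical.

import numpy as np

# Convert per-particle masses (g) measured by SP-ICP-MS into equivalent
# spherical diameters, assuming solid spheres of the bulk metal density.
DENSITY = {"Ag": 10.49, "Au": 19.32}  # g/cm^3

def diameter_nm(mass_g: float, element: str) -> float:
    volume_cm3 = mass_g / DENSITY[element]            # V = m / rho
    d_cm = (6.0 * volume_cm3 / np.pi) ** (1.0 / 3.0)  # V = (pi/6) d^3
    return d_cm * 1e7                                 # cm -> nm

# Example: a hypothetical 1 fg (1e-15 g) silver particle (~57 nm).
print(f"{diameter_nm(1e-15, 'Ag'):.1f} nm")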
Havrdova first reported the potential of FE-SEM for imaging superparamagnetic iron oxide nanoparticles inside hMSCs. FE-SEM allows NPs inside the cells to be observed while avoiding complex procedures such as contrast staining and metal coating. Further, the researchers demonstrated that nanoparticles inside cells can be mapped using FE-SEM, as the nanoparticles protrude through the membrane during the dehydration and drying steps of sample preparation . By using dark- and bright-field TEM, coupled with EDX analysis and ICP-MS analysis, we studied the uptake and cellular distribution of AgNP and AuNP solutions injected in vivo into the mouse tail vein during experiments performed to evaluate the localization of NPs in designing a nanoconstruct to be used in cancer therapies. reports the NP biodistribution and the semiquantitative analysis of the amount of NPs retained by the various mouse organs. In general, AgNPs and AuNPs accumulated in large quantities inside the mouse organs . However, the organs internalizing NPs and the NP amounts differ between AgNPs and AuNPs. Indeed, AuNPs are more widely distributed, in terms of both organs and amount, than AgNPs. This is probably due to the smaller size of these NPs, 10 vs. 30 nm, which permits passage across biological barriers, including the blood–brain barrier. The kidney and the brain are the organs where we detected the greatest quantity of AuNPs. We also observed AgNPs in red blood cells of the kidney and in endothelial cells. These data probably indicate that gold nanoparticles moved to the kidney, as expected, since the physiological function of the kidney is filtering the entire blood flow through the fenestrated endothelium of the glomerular capillaries. The localization of AgNPs and AuNPs in the liver is different: AgNPs are internalized by hepatocytes, while AuNPs by Kupffer and endothelial cells. It is well known that the sinusoidal capillaries in the liver are fenestrated (50–180 nm) and lined with Kupffer cells, which rapidly take up AuNPs; conversely, we have hypothesized that AgNPs are probably trapped in the space of Disse and can be taken up by hepatocytes. The mechanism is still not well understood. A low amount of AuNPs was detected in the pancreas, while the AgNP count is low in the brain and in the spleen. The analysis of intracellular localization shows that NPs taken up by the cells are found mainly inside mitochondria, the nucleus and the rough endoplasmic reticulum (RER), as single nanoparticles or small clusters. Dark- and bright-field analysis confirms the presence of NPs inside the cells. In and we report TEM micrographs of mouse organ sections after NP injection. In addition to the extent of cellular interaction and (intra)cellular localization, the various transformations that may take place after the entry of NPs into the cells are important for the evaluation of the cellular effects of NPs. Only synchrotron methods are truly capable of such analysis. Synchrotron X-ray absorption spectroscopy (XAS) is one of the few methods capable of elemental speciation analysis, since it allows information to be obtained regarding the oxidation state, symmetry and identity of the coordinating ligand environment for an element of interest by exploiting the ability to tune the energy of the incident X-rays. This technique does not require any pretreatment of samples or extraction/isolation of the NPs, as it can be conducted in situ or in vivo on hydrated cells, using cryo conditions to reduce the risk of artifacts caused by the intensity of the X-ray beam. 
Gräfe published a review describing the potential of synchrotron XAS methods for metal speciation analysis in 2014. The study of metallic materials is one of the most ancient scientific fields, as their properties, including strength, toughness, thermal and electrical conductivities, ductility, and high melting point, make metals useful for a wide range of applications, mainly based on the bulk metallic properties. New applications exploit the fascinating novel properties of metal nanomaterials, which are size-, shape-, and crystal form-dependent. Although metal nanomaterials have a long history of preparation and applications, the field has undergone explosive growth only in recent years, as reviewed in Gentile et al. . Metal-based nanostructured materials are used in a variety of food- and medical-related applications. For example, silver-based nano-engineered materials are currently the most common nanocomposites used in food packaging for their antimicrobial capacity, while Au-based NPs are the most common nanoconstructs used in medicine, as drug carriers and imaging tools, due to their chemical/physical properties and biocompatibility. A strong correlation exists between the uptake of nanomaterials and the biological responses. Our analysis indicates that changes in size, shape and surface features will affect cellular uptake, including its modality and amount and the intracellular fate, which, in turn, elicit a positive or a negative response by cells. Thus, it is possible to envisage several strategies to obtain safer metal NPs: (a) design of specific coatings able to bypass or trigger endocytosis; (b) design of specific procedures to escape or reach organelles such as lysosomes; (c) design of a surface coating resistant to the acidic pH of lysosomes to avoid free metal ion release. The first two strategies, along with functionalization of NPs for specific targeting into the cell cytosol, are particularly important in medicine. Conversely, the third strategy represents an approach of industrial interest to realize biocompatible metal NPs useful in the food sector. This will lead to better interaction of the human body with nanomaterials and improve the safety of metal-based NPs, also considering that release of nano-engineered materials may occur in the workplace and that uncertainties still exist regarding several aspects of the risk posed by NPs for workers. In fact, the very few data available suggest that more severe adverse health effects than those caused by larger particles or bulk material may be expected.
Neuroanatomical photogrammetric models using smartphones: a comparison of apps
60b6895c-4b26-47ad-a6cf-4ff50e9f017b
11422470
Anatomy[mh]
In every field of surgery, a deep knowledge of the surgical anatomy of a given target area is mandatory for a successful operative procedure. For this purpose, over the years, many teaching and learning methods have been described, from the most ancient, cadaveric dissection, to the most recent, virtual reality , each with its respective limitations. Until the last decade, anatomy had always been studied in books and atlases, the main drawback being the two-dimensional nature of the images and thus the inability to perceive actual anatomical depth. To overcome this major limit, several technologies have been developed, such as DICOM-based 3D models , 3D photos , and 3D printable models , among which photogrammetry . The introduction of photogrammetry techniques is revolutionizing this field, offering powerful tools that overcome the limitations associated with 2D images, particularly in terms of depth perception and viewpoints. Photogrammetry, a method based on the analysis of photographs, allows for the creation of three-dimensional (3D) models and reconstructions. Advanced software like Agisoft Metashape (Agisoft LLC, St. Petersburg, Russia), ReCap Pro (Autodesk, San Rafael, California, USA), and Reality Capture Beta (Capturing Reality, Bratislava, Slovakia) offer the opportunity to generate highly accurate photogrammetric models. By capturing multiple images of a brain specimen from various angles and processing them through professional software, researchers can create detailed and precise 3D representations of the brain structure . The use of professional software for photogrammetry in Neuroanatomy offers significant advantages in terms of accuracy. Sophisticated algorithms are employed to align images, calculate camera positions and generate highly detailed 3D models. However, managing these kinds of software requires a certain level of expertise and technical proficiency, with a gradual learning curve, since the process of image capturing, marking control points and optimizing the model can be complex and time-consuming. Researchers must have a solid knowledge of the software and of photogrammetry principles to achieve accurate results. In addition to professional software, smartphone apps have also been developed as accessible tools for photogrammetry in Neuroanatomy studies . These apps leverage the built-in cameras of smartphones and offer user-friendly interfaces for capturing images and generating 3D models. Smartphone apps provide a convenient and straightforward solution at the expense of a certain level of accuracy compared to professional software, since they are built on simplified algorithms. Smartphone apps are designed to be more user-friendly and accessible, allowing researchers to quickly capture images and generate basic 3D models without extensive technical knowledge. Smartphone apps may thus offer a valuable but less accurate alternative for researchers seeking quick and convenient 3D reconstructions of brain specimens, especially in the field of neurosurgery. These advancements in photogrammetry have broadened the horizons of neuroanatomical research, empowering researchers with diverse tools and approaches to further our understanding of the brain. This study aims to compare, through quantitative analysis, the differences between the neuroanatomical photogrammetric models generated by two smartphone apps, namely Metascan (Abound Labs Inc., New York, NY, US) and 3D Scanner (Lanns lab, New York, NY, US). 
These two applications have been widely used in previous research for generating photogrammetric human models , and, among all the apps available, their free versions do not require any device other than a smartphone, which is why they were selected for this study. Specimens Two human head specimens (4 sides), embalmed and injected with red and blue latex for arterial and venous blood vessels, underwent a standard frontotemporal approach with the assistance of a 3D exoscope (Vitom, Karl Storz, Tuttlingen, Germany). The heads were fixed using a 3-pin head holder. Each step of the surgical procedure was separately investigated. In accordance with Italian law and the policies of our institution, we hereby state that the use of specimens for academic research purposes within the scope of this study does not require formal approval from an ethics committee. Our practices comply with all relevant national regulations governing the ethical use of such specimens in a research setting. Dissection procedure The anatomical dissection was divided into five steps to expose well-defined structures: Step 1: skin. Step 2: frontotemporal C-shaped skin incision and exposure of the frontotemporal fascia, sparing the superficial temporal artery (STA). Step 3: retrograde dissection of the temporalis muscle according to Oikawa to expose the pterional region underneath. Step 4: frontotemporal craniotomy and dural opening to expose the frontal and temporal lobes and opercula, with the sylvian fissure. Step 5: sylvian fissure splitting to expose the peri-chiasmatic region, in particular: middle cerebral artery, internal carotid artery, posterior communicating artery, anterior choroidal artery, ipsilateral anterior cerebral artery (segment A1), optic nerves, optic chiasm, ipsilateral oculomotor nerve. Photogrammetry and 3D model Each described step of the dissection was scanned using the dual camera system of an iPhone 11 Pro (Apple Inc., CA, USA). A total of 120 photos were taken for each step, while 10% of them were discarded for being out of focus or out of field. The selected photos of each step were separately processed using either Metascan (Abound Labs Inc., New York, NY, US) or the 3D Scanner app (Lanns lab, New York, NY, US) to create two 3D models for each step, set to the maximum possible resolution. The two selected smartphone apps were compared through quantitative and qualitative analyses, as well as in terms of annual app fee, model sharing, time for processing, user-friendliness of the interface, and the possibility of using manual focus and autofocus for each application. Metascan and 3D Scanner are the most widely used applications in the field of photogrammetry. These programs do not require any additional device other than a smartphone, ending up being accessible to more researchers and not requiring extensive technical knowledge. Quantitative mesh analysis Mesh analysis refers to the evaluation of a 3D mesh, which is the surface representation of an object or scene. A mesh consists of interconnected polygons that approximate the shape and structure of the subject being modeled. Mesh analysis involves examining various properties of the mesh, such as faces, vertices, edges, and face corners. It helps assess the quality and density of the mesh and the representation of the captured subject. Quantitative mesh analysis was performed for each model generated by both smartphone apps using Blender (Blender Documentation Team. (2019). Blender 2.81 Reference Manual. https://docs.blender.org/manual/en/2.81/Blender Development Team. (2022). 
Blender (Version 3.1.0) [Computer software]. https://www.blender.org ) following these parameters: Face: A face is a single surface or polygon that makes up the mesh. Faces are often triangular or quadrilateral but can have more complex shapes like pentagons or hexagons. The number of faces in a mesh is a direct indicator of its density. A mesh with many faces has a higher level of detail compared to one with fewer faces, which can impact the model resolution and overall complexity. Edge: An edge is a line that connects two consecutive vertices of a face. The edge count can provide insights into the complexity of the mesh structure. A high number of edges may indicate a more detailed model, but it might also lead to increased computational complexity in some applications. Edges are critical in defining the boundaries of faces. Vertex: A vertex is a 3D point that represents an intersection point between edges in the mesh. Vertices are crucial as they define the shape and positions of points in the mesh. The number of vertices is important for the accuracy of a mesh. A mesh with more vertices can reproduce complex curves and surfaces with greater precision, but this may also come with a higher demand for computational resources. Face Corner: Face corners represent the points where the edges of a face meet. Despite not being frequently adopted as a parameter, the face corner count can be directly related to mesh complexity and detail resolution because face corners determine how the faces are connected to each other. Analyzed factors The two smartphone apps were compared in terms of annual app fee, model sharing, time for processing, user-friendliness of the interface, and the possibility of using manual focus and autofocus for each application. Qualitative visual inspection of the photogrammetric model The 3D models were evaluated by an experienced neuroanatomist through visual inspection of specific structures at each step: step 1) representation of the skin; step 2) representation of the STA and muscle; step 3) representation of the squamous suture, bone, and deep fascia of the temporalis muscle; step 4) representation of the cortical surface; step 5) representation of the chiasm, optic nerves bilaterally, internal carotid artery, middle cerebral artery, posterior communicating artery, anterior cerebral artery, anterior choroidal artery. The visual inspection was made using Blender (Documentation Team. (2019). Blender 2.81 Reference Manual https://docs.blender.org/manual/en/2.81/Blender Development Team. (2022). Blender (Version 3.1.0) [Computer software]. https://www.blender.org ). The macroscopic differences between the models generated using the two apps were recorded using the snapshot function. Statistical analysis Values were reported as mean ± standard deviation (SD). The ANOVA test was used to compare the quantitative continuous variables between Metascan and 3D Scanner among all steps. Statistical significance was predetermined at an alpha value of 0.05 (95% confidence interval). BlueSky Statistics (Copyright © 2024 BlueSky Statistics) was used for data analysis (Table ). 
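To make the mesh analysis reproducible, the four counts described above can be read directly from Blender's Python console. The minimal sketch below assumes the photogrammetric model has already been imported into the scene; the object names and the Blender release are the only assumptions (in Blender, face corners correspond to mesh "loops").

import bpy

# Report the four mesh statistics used in this study for every mesh object
# currently loaded in the Blender scene.
for obj in bpy.data.objects:
    if obj.type != 'MESH':
        continue
    mesh = obj.data
    print(obj.name,
          "vertices:", len(mesh.vertices),
          "edges:", len(mesh.edges),
          "faces:", len(mesh.polygons),
          "face corners:", len(mesh.loops))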
Quantitative mesh analysis (Table ) For each step, 4 models were generated, using both Metascan and 3D Scanner, for a total number of 40 photogrammetric models. Number of vertices In the models processed with Metascan, on average, a superiority of 69.06% was observed in step 1, 97.73% in step 2, 76.59% in step 3, 25.04% in step 4, and 28.62% in step 5 compared to the models processed with 3D Scanner. Number of edges In the models processed with Metascan, on average, a superiority of 69.97% was observed in step 1, 99.28% in step 2, 78.58% in step 3, 25.57% in step 4, and 28.96% in step 5 compared to the models processed with 3D Scanner. Number of faces In the models processed with Metascan, on average, a superiority of 70.46% was observed in step 1, 100% in step 2, 79.76% in step 3, 25.83% in step 4, and 29.15% in step 5 compared to the models processed with 3D Scanner. Number of face corners In the models processed with Metascan, on average, a superiority of 70.43% was observed in step 1, 100% in step 2, 79.69% in step 3, 25.85% in step 4, and 29.15% in step 5 compared to the models processed with 3D Scanner. 
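As an illustration of how the superiority percentages and the ANOVA comparison can be computed, the short Python sketch below uses placeholder vertex counts (the study reports only percentages); defining superiority as the relative difference of the means is an assumption.

import numpy as np
from scipy.stats import f_oneway

# Hypothetical vertex counts for the four models per app in a single step
# (placeholder values, not the study data).
metascan = np.array([512_000, 498_000, 530_000, 505_000])
scanner_3d = np.array([301_000, 296_000, 315_000, 300_000])

# "Superiority" assumed here as the mean percentage difference of Metascan
# relative to 3D Scanner.
superiority = 100.0 * (metascan.mean() - scanner_3d.mean()) / scanner_3d.mean()
print(f"Superiority in vertices: {superiority:.2f}%")

# One-way ANOVA between the two apps for this variable (alpha = 0.05).
f_stat, p_value = f_oneway(metascan, scanner_3d)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")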
App evaluation The two applications taken into account present some comparable characteristics, such as the processing time and the possibility of using cloud processing. The 3D Scanner App also offers the possibility of processing the model using a personal computer. With either Metascan or the 3D Scanner App it is possible to export the models in several formats or even export them in the form of short videos; it is also possible to edit the model with basic functions, such as the crop function . Despite both having a user-friendly interface, the 3D Scanner App also adds a brief description of each file extension option in the sharing section, which is helpful for novices. Regarding image capture, each of them offers the possibility of capturing the images with autofocus, but in the 3D Scanner App it is also possible to capture images by video recording (auto capture function). Despite this interesting function, which reduces shooting time, the photos are burdened by a lack of definition. Metascan offers the possibility of adjusting the focus manually for every single photo, which helps capture specific target regions located more deeply with respect to the surrounding structures. The results of the evaluation of the two smartphone apps are summarized in Table . Visual inspection of the photogrammetric model Despite the significant difference in mesh density found among the models, in Model 1 the visual inspection of these models appears similar except for a small skin imperfection just over a scar (Fig. a, d). The complexity of the model increased in the subsequent steps, and the differences at visual inspection between the models generated by Metascan and 3D Scanner App became more evident. In Fig. (b, e) the skin anterior to the frontotemporal incision appears out of focus, while in Fig. (c, f) it is possible to see a significant image distortion in the parieto-occipital region and along the skin flap. In Fig. (a, c) some small cortical vessels are missing, and there are two areas of image distortion along the skin incision in the model generated using 3D Scanner. Finally, in Fig. (b, d), the posterior communicating artery (PcomA), the anterior choroidal artery (AchA), and their perforating branches are missing. Another important factor encountered during the visual inspection of the two models was the different zoom power, which was greater in the models generated by Metascan. The results of visual inspection are listed in Figs. and . 
The reconstruction of 3D anatomical models represents an innovative approach to anatomy education and learning, as interactive photogrammetric models can be navigated and visualized in augmented, virtual, or mixed reality. Scientific evidence supports the use of these tools as an efficient path to anatomical understanding compared to traditional methods . Moreover, advancements in cloud-based processing have significantly reduced the computational resources required, facilitating the proliferation of smartphone applications created for this purpose. While numerous studies have been conducted on neuroanatomical photogrammetric models , there remains a lack of standardized quality control methods. In this study, we aimed to address this limitation by evaluating the quality of models through the analysis of mesh density, a quantifiable parameter commonly used for assessment. Our findings revealed that photogrammetric models generated using Metascan exhibited superior mesh density compared to those produced with 3D Scanner in each step. 
Additionally, visual inspection of the photogrammetric models confirmed the superiority of Metascan, as highlighted by a higher occurrence of digital artifacts (a term commonly used in photography to denote a loss of definition) in the 3D Scanner models, particularly in the soft tissue representation during the initial steps of the pterional approach and in the graphic depiction of the parasellar region, where the rendering of structures such as the posterior communicating artery (Pcom), anterior communicating artery (AcoA), and their perforating branches was not achieved satisfactorily. By analyzing the percentage differences in mesh density between the two apps for each step, a reduction in Metascan's percentage superiority was observed, decreasing from 69% in step 1 to 28% in step 5 (Table 1). However, no reduction in the digital artifacts found in the models was observed. This may be due to the increase in surface area, depth of the dissection, and complexity of the model to be reconstructed in the subsequent steps. The mesh density achieved with Metascan has proved sufficient for an accurate qualitative reproduction of neuroanatomical images. Moreover, achieving comprehensive coverage with overlapping photographs within a field poses a challenge, particularly in light of the non-linear nature of neuroanatomy. The cerebral surface, characterized by sulci, fissures, gyri, and convolutions, presents a complex topography. Similarly, the basicranium, the neurovascular structures contained within cisternal spaces, and bony features such as foramina, ridges, and osseous canals contribute to the intricate three-dimensional landscape. Consequently, the development of tools capable of delivering high-fidelity reproductions of dissections is crucial. As previously described, the use of photogrammetry in Neuroanatomy offers numerous advantages, to the extent that it could potentially supplant traditional two-dimensional images in the future. However, it is essential to address the need for adequate quality control systems to ensure the accuracy and reliability of photogrammetric data. Errors in image registration can lead to erroneous conclusions and have significant implications for neuroanatomical research. For example, using a 3D model with numerous digital artifacts for measurements in neuroanatomical studies could affect the accuracy of the results. Therefore, method validation and the implementation of rigorous quality controls are essential to ensure scientifically valid results. A major limitation is represented by the comparison of only two among the various smartphone apps available. The choice of comparing these two apps was based on their application in the field of human anatomy , since on one hand they are the most cited in the literature up to now and, on the other hand, their free versions do not require any additional tools, unlike other apps . By enabling depth perception, capturing high-quality images and offering flexibility of viewpoints, photogrammetry provides researchers with unprecedented opportunities to explore and understand the intricate and magnificent structure of the brain. However, it is of paramount importance to develop and apply rigorous quality control systems to ensure data integrity and the reliability of findings for neurological research. In particular, this study demonstrates the superiority of Metascan when it comes to processing photogrammetric models for neuroanatomical studies. 
Further studies should explore the availability of other quality control systems and evaluate the accuracy of linear measurements via photogrammetry.
Amorphous Drug–Polymer Salts: Maximizing Proton Transfer to Enhance Stability and Release
52545cde-d3ef-4e0f-9be6-8f71a9cfdc06
9906740
Pharmacology[mh]
An amorphous solid is more soluble than its crystalline counterpart. , In recent years, this principle has been applied to develop amorphous solid dispersions (ASDs) to deliver poorly soluble drugs. − An ideal ASD provides enhanced solubility over its crystalline counterpart and high stability against crystallization to maintain its solubility advantage. A recent progress in this area is the formulation of amorphous drug–polymer salts (ADPS). , An ADPS is formed by the acid–base reaction between a small-molecule drug and an oppositely charged polyelectrolyte. Relative to an ASD of neutral drug and polymer, an ADPS is more stable in a hot and humid environment, a need for many medicines for global health. This enhanced stability results from the strong ionic interaction between a drug and a polymer, which reduces the driving force for crystallization, and from the difficulty for the drug and the polymer to form a co-crystal. The increase of thermodynamic stability, at first glance, suggests reduced solubility, but excellent dissolution performance has been observed in biorelevant media for lumefantrine (LMF) and clofazimine (CFZ) formulated with poly(acrylic acid) (PAA) (see for the structures of LMF, CFZ, and PAA). , For an ADPS, the extent of acid–base reaction is a critical quality attribute. For a basic drug like LMF or CFZ, this refers to the fraction of the molecules that are protonated by an acidic polymer. Song et al. reported significant variation in the fraction of LMF molecules that were protonated by acidic polymers depending on the process condition. For example, in the formulations with PAA at 40 wt % drug loading, LMF was 5% protonated if prepared by hot-melt extrusion (HME) and 15% protonated by rotary evaporation (RE). These values indicate very low degrees of salt formation and a significant effect of the process condition. This effect is perhaps not surprising given the large size and low mobility of polymers, making a drug–polymer salt slower to form than a salt of small ions. In this work, we confirm the critical role of the process condition in forming a drug–polymer salt and demonstrate that nearly complete salt formation is possible under proper conditions. Many methods have been used to prepare ASDs, including HME, , spray drying (SD), and RE. , Our recent work introduced a low-cost slurry conversion method for synthesizing ADPS. In this method, a physical mixture of the drug and the polymer is stirred in the presence of a small amount of solvent, which is then removed. Compared to SD and RE, this method uses less solvent and does not require complete dissolution of the reactants; compared to HME, it uses a lower temperature, thus applicable to thermally labile polymers such as PAA. In this work, we apply the slurry method to prepare the amorphous salt of LMF and PAA and compare the product with those prepared by HME and RE. In addition, antisolvent precipitation is tested as another method of preparation. , Lumefantrine (LMF), the model drug of this study, is a low-solubility WHO Essential Medicine and first-line antimalarial. Jain et al. have shown that the bioavailability of LMF can be improved through an ASD formulation. Being a malaria medicine, LMF formulations should be stable under tropical conditions since many regions afflicted by malaria are hot and humid. This requirement can potentially be met using the approach of amorphous drug–polymer salts. As a weak base, LMF can be protonated by an acidic polymer like PAA. Hiew et al. 
investigated amorphous LMF formulated with several polymers. Their work did not include PAA and did not consider the impact of the process condition on LMF protonation, which are the focus of this study. We report that the amorphous formulations of LMF and PAA prepared by slurry conversion and antisolvent precipitation form a single trend where the degree of drug protonation increases with PAA concentration from zero for pure LMF to ∼100% above 70 wt % PAA. This profile holds regardless of the synthetic method and the PAA molecular weight (1.8, 450, and 4000 kg/mol) and thus describes the equilibrium condition for salt formation. Remarkably, the slurry conversion method achieved much more complete salt formation than HME and RE, highlighting the importance of process conditions in completing the proton transfer between the drug and the polymer. We find that a high degree of salt formation leads to improved stability and drug release. Materials Poly(acrylic acid) (PAA, Carbomer, M W = 1.8, 450, 4000 kg/mol) was purchased from Sigma-Aldrich (St. Louis, MO), lumefantrine (LMF) from Nanjing Bilatchem Industrial Co. (Nanjing, China), dichloromethane (ChromAR grade) from Thermo Fisher Scientific (Fair Lawn, NJ), and ethanol from Decon Laboratories (King of Prussia, PA). All materials were used as received. Amorphous Formulations of LMF and PAA Slurry Conversion The slurry synthesis of amorphous LMF-PAA has been described by Yao et al. In addition to the original synthesis temperature (75 °C), a reduced temperature of 25 °C was tested and we found that the products prepared after 30 min of reaction at 25 °C showed similar degrees of protonation as those prepared at 75 °C. The products were ground in an agate mortar with a pestle to a fine uniform powder prior to further analysis. For PAA of higher M W (450 and 4000 kg/mol), a reaction with LMF was performed using both the slurry method of Yao et al. and another method with more vigorous mixing. In the latter method, a physical mixture of LMF and PAA at a chosen drug loading (25, 50, 75 wt %) was combined with the solvent (dichloromethane/ethanol, 1:1 by volume) at a 4:1 solvent/solid ratio. The resulting paste was milled in a ball mill (MM400, Retsch GmbH, Haan, Germany). The container of the mill was a 25 mL capacity steel jar with five 5 mm stainless steel balls. The mill operated at 20 Hz and the milling time was 30 min. The milling was performed at room temperature, and the internal temperature was measured immediately after milling with an IR thermometer. The increase of the internal temperature was less than 5 °C. Melt Quenching To assess the effect of the degree of salt formation on formulation performance, amorphous LMF-PAA was prepared using a melt-quench method to simulate HME. A physical mixture of LMF and PAA 450 kg/mol was prepared at 50 wt % drug and heated to 135°C while stirring with a stainless steel spatula to mimic HME. The heating time was ∼4 min. The melt was cooled to room temperature by contact with an aluminum block. The product was ground in an agate mortar with a pestle to a fine powder before further analysis. Antisolvent Precipitation A solution of LMF in acetone (50 mg/mL) was added to an aqueous solution of PAA (3.5 mg/mL) under agitation via a magnetic stir bar, causing precipitation. The precipitant was filtered using Whatman Grade 2 Qualitative Filter Paper and dried under vacuum overnight at room temperature and ground in an agate mortar with a pestle to a fine powder before further analysis. 
Powder X-ray Diffraction X-ray diffraction patterns were collected using a Bruker D8 Advance X-ray diffractometer with a Cu Kα source operating at a tube load of 40 kV and 40 mA. A powder sample (∼10 mg) was spread and flattened on a Si (510) zero-background holder and scanned between 3 and 40° (2θ) at a step size of 0.02° and a scan rate of 1 s/step. X-ray Photoelectron Spectroscopy (XPS) The details of XPS measurement and data analysis have been described previously. For an amorphous LMF-PAA formulation, approximately 5 mg of powder was pressed into a tablet using a stainless steel press. For a sample of pure LMF, approximately 1 mg of LMF powder was melted on a glass coverslip and quenched to room temperature by contact with an Al block. The samples were stored in a sealed plastic tube filled with Drierite before analysis. The high-resolution spectrum of the N atom was used to measure the fraction protonated of LMF. For each sample, the N spectrum was recorded in duplicate in two separate regions. Curve fitting was performed using the program Origin following smart baseline subtraction. Dissolution 
Solubility tests were performed in simulated gastric fluid (SGF). The details of sample preparation, data collection, and analysis have been described previously. Degree of Salt Formation in Amorphous LMF-PAA Prepared by Slurry Conversion shows the typical XPS spectra of the N atom collected to determine the degree of proton transfer (salt formation). These materials were prepared at different drug loading with PAA 450 kg/mol using the slurry conversion method and confirmed amorphous by X-ray diffraction (XRD). Yao et al. have shown that the glass transition temperatures of these materials were significantly elevated relative to those of the pure components (17 °C for LMF and 126 °C for PAA), consistent with salt formation; for example, at 50 wt % drug loading, the T g exceeded 130 °C. The pure drug, a free base, shows a single peak at 399 eV, corresponding to the unprotonated amine N. With increasing PAA concentration (decreasing drug loading), this peak decreases and a new peak emerges at 401.5 eV. The new peak corresponds to the protonated amine group. , Together, the spectra in indicate an increase in the protonated fraction of the drug with increasing PAA concentration. The fraction protonated of LMF is calculated from an XPS spectrum as follows 1 where A P and A N are the areas of the protonated and the neutral N peaks, respectively, obtained by curve fitting . Because XPS is a surface analytical tool with a probe depth of several nanometers, it is important to establish that the degree of salt formation measured by XPS is representative of the entire material, not just the surface region. For this, we compare in the drug concentrations in the bulk and at the surface for a series of materials prepared by slurry conversion. The bulk concentration was obtained from the initial amounts of LMF and PAA used for slurry synthesis. Since neither component was lost in this one-pot synthesis, the overall concentration of the product can be obtained from the initial amounts. The surface concentration was measured by XPS as follows 2 where w LMF is the weight fraction of LMF, k is the measured N/O ratio, M P is the molecular weight of the PAA monomer, and M LMF is the molecular weight of LMF. indicates that there is no significant difference between the drug concentrations at the surface and in the bulk. This is not surprising because before XPS analysis, each sample was ground to fine particles, exposing internal surfaces. According to Yu et al., the time for the surface composition to equilibrate is determined by the rate of polymer diffusion through the bulk and can be years or longer below the glass transition temperature. That is, even if a thermodynamic driving force exists for component segregation in the surface region, the kinetics are too slow to have a significant effect on our results and the degree of salt formation from XPS is representative of the bulk material. shows the protonated fraction of LMF molecules in the amorphous formulations with PAA of three M W s (1.8, 450, and 4000 kg/mol) prepared by slurry conversion. For each M W grade, the fraction protonated is plotted against drug loading. For PAA 1800 g/mol, the results correspond to the products of the standard slurry synthesis. For higher- M W PAA grades, the results correspond either to the products of the standard synthesis or to those prepared with more vigorous mixing. As discussed below, for formulations of high polymer content, enhanced mixing was needed to complete the proton transfer. 
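Because the displayed equations did not survive extraction, the relations described above can be spelled out in a short script. The following is a minimal sketch in Python, not the authors' code: eq 1 (the protonated fraction from the fitted N 1s peak areas) follows directly from the text, whereas the form used for eq 2 (surface drug weight fraction from the measured N/O ratio) is reconstructed under the assumption that each LMF molecule contributes one N and one O atom and each acrylic acid monomer two O atoms, and should be read as illustrative rather than the paper's exact expression.

```python
# Minimal sketch (ours) of the XPS analysis described above.
# Eq 1 follows directly from the text: the protonated fraction of LMF is the
# protonated N peak area divided by the total N peak area.
def fraction_protonated(A_P, A_N):
    return A_P / (A_P + A_N)

# Assumed form of eq 2 (not verbatim from the paper): surface weight fraction
# of LMF from the measured N/O atomic ratio k, assuming one N and one O atom
# per LMF molecule (M_LMF = 528.9 g/mol) and two O atoms per acrylic acid
# monomer (M_P = 72.1 g/mol).
def surface_drug_weight_fraction(k, M_LMF=528.9, M_P=72.1):
    monomers_per_drug = (1.0 - k) / (2.0 * k)   # mol AA monomer per mol LMF
    return M_LMF / (M_LMF + monomers_per_drug * M_P)

print(fraction_protonated(A_P=3.5, A_N=1.5))    # 0.70, i.e. 70% protonated
print(surface_drug_weight_fraction(k=0.25))     # ~0.83 for a hypothetical N/O ratio
```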
The data in form a single trend with no significant difference between PAA of different M W s. This indicates that the acid–base reaction between LMF and PAA had reached equilibrium. Had the degree of salt formation been limited by kinetics, the larger, less mobile polymer would be slower to react, resulting in less complete salt formation. The simplest explanation for the “master curve” in is that the slurry synthesis allowed the reaction to reach equilibrium. Consistent with this view, the curve through the data points is a fit to a reaction model (see below). shows that the protonated fraction of LMF molecules increases as the PAA concentration increases (as drug loading decreases). The fraction is zero for the pure drug (a free base) and rises with the PAA concentration, approaching 100% above 70 wt % PAA. This trend is sensible since at a low PAA concentration, there are not enough acidic groups to neutralize all the basic drug molecules. The vertical line at w 0 = 88 wt % corresponds to one LMF molecule ( M W = 528.9 g/mol) per PAA monomer ( M W = 72.1 g/mol). The observed profile indicates that even when PAA monomers are in excess, not every monomer can react with a drug molecule. As noted above, some formulations required more vigorous mixing to reach reaction equilibrium than utilized in the standard slurry synthesis. This occurred at higher PAA M W and higher PAA concentration. We illustrate this in for PAA 4000 kg/mol. For this M W grade, significant gelling occurred upon addition of the solvent, making stirring difficult and the reaction less reproducible. In , we compare the protonation profiles of amorphous LMF prepared with PAA 4000 kg/mol using the standard slurry synthesis (open symbols) and with enhanced mixing in a Retsch mill (solid symbols). The standard synthesis yielded products with lower degrees of protonation and larger scatter, whereas the products formed with enhanced mixing had higher and tighter degrees of protonation. For this reason, the PAA 4000 kg/mol results in correspond to those obtained with enhanced mixing. A 4000 kg/mol polymer is a giant molecule, and it is not surprising that better mixing is required to complete its reaction with the drug. For PAA 450 kg/mol, the effect described above is less severe and noticeable only at high polymer concentrations (above 50 wt %). When a significant effect is noted, the results plotted in are those obtained with enhanced mixing. Amorphous Formulations of LMF and PAA by Antisolvent Precipitation To expand the survey of synthetic methods, we investigated antisolvent precipitation as an alternative approach to preparing amorphous LMF-PAA. This method is analogous to “coprecipitated amorphous dispersion” (cPAD) of Strotman and Schenck. In this method, each component was dissolved first (LMF in acetone and PAA in water) and the mixing of the two solutions induced precipitation. The precipitant was confirmed amorphous by XRD. As in the case of slurry conversion, antisolvent precipitation was performed using PAA of different M W s (1.8, 450, 4000 kg/mol) at different drug/polymer ratios that corresponded to 25, 50, and 75% drug loading. This “bottom-up” method, in principle, enables more complete mixing of the reactants than a “top-down” method like HME and slurry conversion. An issue with the precipitation method, however, is the unknown composition of the precipitant since some reactants may remain dissolved in the supernatant. 
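As an aside before the composition of the precipitated products is discussed, the stoichiometric limit w0 quoted above (one LMF molecule per AA monomer) is a quick arithmetic check from the two molecular weights given in the text; the numbers below come from the passage and the rounding is ours.

```python
# Quick check of the stoichiometric loading w0: one LMF molecule per acrylic
# acid monomer, using the molecular weights given in the text.
M_LMF = 528.9   # g/mol, lumefantrine
M_AA = 72.1     # g/mol, acrylic acid repeat unit
w0 = M_LMF / (M_LMF + M_AA)
print(f"w0 = {100 * w0:.0f} wt% drug")   # ~88 wt%, the vertical line noted in the text
```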
In contrast, the composition of a slurry-prepared product is known from the initial amounts of the ingredients because no ingredient is lost in the one-pot synthesis. For this reason, the drug concentration in a precipitated product must be determined and we did so by XPS from the N/O atomic ratios as described previously . In , we compare the protonation profiles of the products of antisolvent precipitation (open symbols) and slurry conversion (solid symbols). For the slurry products, the results are the same as those in but we do not distinguish the PAA M W s since the data cluster together. Similarly, for the precipitated products, the PAA M W had no significant effect on the degree of protonation observed and we simply plot the results together without distinguishing the PAA M W s. shows that relative to slurry conversion, antisolvent precipitation consistently yielded products of high drug concentration (70–90 wt %), regardless of the initial drug/polymer ratio. This means a significant fraction of PAA did not precipitate with LMF but remained dissolved in the solution. This is caused by the high aqueous solubility of PAA. For this reason, the actual drug concentration in the precipitant did not correspond to the initial drug loading and must be determined post-isolation by XPS. It is interesting that the precipitated materials all had a composition close to w 0 (one LMF molecule per PAA monomer). Despite their narrower range of composition, the products of antisolvent precipitation join the same trend as those prepared by slurry conversion. This single trend supports the idea that both methods reached the equilibrium for the proton transfer between the drug and the polymer. Consistent with this view, an equilibrium reaction model yields a fitting curve that accounts for the observed data (see below). Between the two methods, slurry conversion provided continuous tunability of drug loading, whereas antisolvent precipitation yielded products of only high drug loading. For this reason, slurry conversion is the more versatile of the two and the method of choice for the remainder of this work. In , we compare the degrees of salt formation in amorphous LMF-PAA prepared by slurry conversion in this work and by HME and RE in the study of Song et al. In addition, a melt-quench formulation from this work is included. For a fair comparison, all these materials were prepared with PAA of the same M W (450 kg/mol). All the % protonated values in were obtained by XPS and prior to XPS analysis, each sample was milled to ensure that the internal composition was analyzed . It is noteworthy that our slurry-prepared formulations reached significantly higher degrees of salt formation than those by RE and HME. At 40% drug loading, the slurry method reached 85% drug protonation, while HME and RE 5 and 15%, respectively. This indicates that the drug–polymer reaction was incomplete in the latter two cases. This result is startling since HME and RE are standard methods for ASD manufacturing and reached very low degrees of salt formation. To investigate salt formation by HME, we prepared an amorphous formulation of LMF and PAA under conditions that mimic HME. This formulation was prepared at 50% drug loading using PAA 450 kg/mol; the ingredients were melted together and stirred in the molten state. This formulation reached 19% protonation (solid circle in ), which is broadly consistent with Song et al. HME values and significantly lower than the level reached by slurry synthesis. 
This comparison confirms the low degree of salt formation by HME and indicates the significant role of manufacturing methods and process conditions in completing the reaction between a drug and a polymer. Why is the proton transfer between LMF and PAA less complete in HME than in slurry conversion? In an HME process, the components are mixed through heat and mechanical agitation without the aid of a solvent. This might suggest that a solvent could facilitate the reaction, perhaps by reducing its kinetic barrier for mass transport. This notion is consistent with Song et al. observation of a more complete salt formation by RE than by HME. However, it cannot explain the large discrepancy between their RE product and our slurry product . The RE process of Song et al. used more solvent (50:1 liquid/solid ratio) than our slurry method (4:1). In the RE process, LMF and PAA were initially dissolved in a single solvent (DCM/methanol), which was then removed under vacuum. The larger amount of solvent used could increase the drying time and the likelihood of phase separation during drying. Despite these differences, the similarity between RE and slurry conversion suggests that the RE conditions could be modified to achieve more complete salt formation. Overall, the results presented in highlight the importance of the process condition in preparing amorphous formulations that have a consistent internal state of drug–polymer interactions. Later, we will explore the effect of a varying degree of salt formation on drug stability and release. Model for Equilibrium Protonation Profile Here, we describe a model for the equilibrium protonation profile of LMF by PAA, which was used to generate the fitting curves in , , and . Readers interested in the effect of the degree of protonation on drug performance can skip this section. This model assumes the following chemical equilibrium 3 where B stands for the LMF free base, HA is an average AA monomer, and BH + A – is an ion pair between LMF and an AA monomer. The equilibrium constant of the reaction is given by 4 where a s , a b , and a a are the activities of the ion pair, the free base, and the AA monomer, respectively. Expressing concentrations as mole fractions, we have a i = x i f i , where x i is the mole fraction of component i ( i = s, b, or a) and f i is its activity coefficient. An effective equilibrium coefficient can be defined 5 If represents a chemical equilibrium, K is a constant independent of the concentrations. But since the activity coefficients f i in general depend on concentrations, so does K eff . shows the experimentally determined K eff at each drug loading from the % protonation value. We find that K eff increases exponentially with x a0 , the total AA monomer mole fraction (neutral and deprotonated). While this increase can arise from the concentration dependence of all three activity coefficients, we speculate that the coefficient for the AA monomer f a makes the largest contribution. At a low polymer concentration, LMF molecules must compete for the reaction sites on the same polymer chain. This would be difficult, and an average AA monomer would have a low probability to react with LMF (low activity). At a high polymer concentration, many acidic groups are available to react with LMF, leading to a high probability of reaction (high activity). To generate the fitting curves in , , and , we solve at each drug loading with K eff as a parameter. 
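To make the equilibrium model concrete before its composition dependence is discussed further below, the following is a small numerical sketch (ours, not the authors' code). It solves the reaction B + HA ⇌ BH⁺A⁻ for a given K_eff by bisection and reports the protonated fraction of the drug; the ion pair is treated as a single species when mole fractions are computed, and the K_eff value used here is hypothetical rather than one of the fitted values from the text.

```python
# Numerical sketch (ours) of the equilibrium model of eqs 3-5:
# B + HA <=> BH+A-, with K_eff = x_s / (x_b * x_a) in mole fractions.
# Simplifying assumption: the ion pair BH+A- counts as one species. K_eff is a
# hypothetical input; in the text it is a fitted, composition-dependent
# quantity of the form K_eff = K0 + a*exp(b*x_a0).

def protonated_fraction(n_b0, n_a0, K_eff):
    """Fraction of drug B protonated at equilibrium, given initial moles of
    drug (n_b0) and acidic monomer (n_a0)."""
    def residual(s):                          # s = moles of ion pair formed
        n_tot = n_b0 + n_a0 - s               # two reactants become one ion pair
        x_s = s / n_tot
        x_b = (n_b0 - s) / n_tot
        x_a = (n_a0 - s) / n_tot
        return x_s - K_eff * x_b * x_a        # zero at equilibrium
    lo, hi = 0.0, min(n_b0, n_a0) * (1.0 - 1e-12)
    for _ in range(200):                      # simple bisection; residual rises with s
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if residual(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi) / n_b0

M_LMF, M_AA = 528.9, 72.1                     # g/mol
drug_loading = 0.50                           # 50 wt% LMF
n_b0 = drug_loading / M_LMF                   # mol LMF per gram of formulation
n_a0 = (1.0 - drug_loading) / M_AA            # mol AA monomer per gram
print(protonated_fraction(n_b0, n_a0, K_eff=5.0))   # protonated fraction for these inputs
```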
In addition, we assume that K eff has an exponential dependence on x a0 : K eff = K 0 + α exp(β x a0 ), where K 0 , α, and β are fitting parameters. The good fits obtained support the conclusion that slurry conversion and antisolvent precipitation can achieve the equilibrium of proton transfer between LMF and PAA. Effect of Salt Formation on Stability and Drug Release To investigate the effect of salt formation on formulation performance, we studied the stability and dissolution of two amorphous LMF-PAA formulations that had identical drug loading (50 wt %) and PAA M W (450 kg/mol), but different degrees of salt formation. By slurry synthesis, we prepared a material with 70% protonation, and by melt quench, a material with 19% protonation. a shows the XPS spectra of these two materials. Note the prominent protonated N peak of the slurry-prepared material and the prominent unprotonated N peak for the melt-quenched material. Both materials were amorphous according to XRD. b shows the stability of these two materials against crystallization at 40 °C and 75% R.H. The slurry-prepared formulation remained amorphous after 540 days, whereas the melt-quenched material crystallized significantly after 30 days. This is fully consistent with our understanding of the effect of drug–polymer salt formation on stability. The salt formation between a drug and a polymer reduces the crystallization driving force to a greater extent than the mixing of a neutral drug with a neutral polymer. This comparison indicates the positive effect of more complete salt formation on stability. c shows the dissolution curves for the two amorphous formulations above in simulated gastric fluid (SGF). For comparison, the result is also shown for the crystalline drug. Relative to the crystals, both amorphous formulations show elevated concentrations for at least 8 h, but the slurry-prepared formulation reached significantly higher concentration (by a factor of 6) than the melt-quenched formulation. Considering their different degrees of protonation (70 and 19%, respectively), the results indicate a positive effect of salt formation on drug solubilization. It is noteworthy that the comparisons in b,c are between two amorphous materials of identical composition, but different degrees of salt formation. This strengthens the conclusion that more complete salt formation improves the stability and the drug release of an amorphous formulation of LMF and PAA. That the salt formation between LMF and PAA can simultaneously enhance stability and drug release might seem counterintuitive since high stability often leads to low solubility. In previous work, this dual enhancement has been observed for both LMF and CFZ formulated with PAA. Others have studied amorphous LMF formulations with polymers, , and for a series of polymers (excluding PAA), Hiew et al. noted that RE-prepared formulations containing protonated LMF tend to be more stable against crystallization but have worse dissolution performance. Their conclusion agrees with ours with respect to stability but not dissolution. To understand this, we note that the polymer of our formulation, PAA, was not in their study and could be an outlier for their trend. In addition, the dissolution medium is SGF in this study, but a phosphate buffer in their study. Further work is warranted to develop a unified understanding. Greater Protonating Power of PAA “Dimer” The results in indicate that PAA of different M W s (1.8–4000 kg/mol) have similar ability to protonate LMF. 
We now show that at a lower M W , PAA could have a greater protonating power. shows the degree of salt formation as a function of PAA M W at a fixed drug loading of 75%. At this drug loading, the polymer formulations show a similar degree of salt formation, ∼50%. We use maleic acid ( M W = 116.07 g/mol) as a mimic for a dimer of AA. An amorphous salt of LMF and maleic acid was prepared using a solvent evaporation method and was found to contain LMF that was 85% protonated. This suggests a possible increase of protonating power below M W ∼1 kg/mol. One explanation for this effect is that LMF is a larger molecule than a PAA monomer and binding to one monomer on a polymer chain blocks access to the adjacent monomers. For a free-moving dimer, however, this crowding effect is less severe. Despite this potential increase of protonating power at low M W , we do not advocate the use of a small-molecule counterion for salt formation because we would lose the stabilizing benefit of a polyelectrolyte. Yao et al. showed that amorphous particles of LMF formulated with PAA 450 kg/mol at 50 wt % drug loading remained free-flowing after 540 days at 40 °C and 75% R.H. In contrast, the same formulation prepared with maleic acid became a viscous liquid after 1 day under the same condition. This is a consequence of a large increase in the glass transition temperature of LMF by PAA while the same stabilizing effect is not achieved with an AA dimer. Salt Formation in LMF-PAA and CFZ-PAA compares the degrees of salt formation in the LMF-PAA system and in the CFZ-PAA system. Both formulations were prepared using the slurry method with PAA 450 kg/mol. Gui et al. determined the degree of salt formation in CFZ-PAA by visible absorption spectroscopy, taking advantage of the color change of CFZ upon protonation. At the same drug loading, CFZ is protonated to a greater extent than LMF. CFZ is almost fully protonated below 60 wt % drug loading, whereas LMF does so below 30 wt % drug loading. This demonstrates the important role of the drug molecule in the degree of salt formation that can be reached. It is unclear why CFZ is more easily protonated by PAA than LMF. The literature p K a values for the two molecules are 8.5 for LMF, and 8.4 (ref and 9.3 (ref , calculated value) for CFZ, which do not provide a convincing distinction of their basicity. CFZ is a marginally smaller molecule than LMF and could more easily pack around a PAA chain, perhaps facilitating salt formation. There is some evidence from spectroscopy and computer modeling that CFZ can be doubly protonated (see illustration at the bottom of ). Keswani et al. assign a p K a of 2.3 to this site, which suggests that it could not be protonated by PAA (p K a = 4.5). In the crystal structure of CFZ with citric acid, this site is observed to form a hydrogen bond with a carboxylic acid group without ionization, while the primary site is protonated and forms a hydrogen-bonded ion pair with a carboxylate ion. Similar multisite interactions could occur in CFZ-PAA, possibly aiding salt formation. It is interesting to note that in the crystals, the protonated LMF and CFZ each form a cyclic hydrogen-bonded ion pair with a carboxylate ion. In the fumarate salt of LMF, the ammonium group and the adjacent OH group form a cyclic hydrogen bond with both oxygen atoms of the carboxylate ion. In the carboxylate salts of CFZ, the imine N and the adjacent NH group are both hydrogen-bonded with one of the O atoms of the carboxylate ion. 
It is possible that similar hydrogen-bonded ion pairs occur in the amorphous phase of LMF-PAA and CFZ-PAA. It is of interest to consider the proton transfer behavior observed in this work in light of the empirical rule for predicting salt formation from the p K a difference, Δp K a , between the reactants. According to this rule, Δp K a > 4 ensures salt formation. This condition is met for PAA reacting with both LMF and CFZ (primary protonation) and the rule would predict proton transfer. Experimentally, we find that proton transfer does occur in these two systems, but the degree of proton transfer depends strongly on drug loading. This result is not surprising given that the rule is based on a survey of small molecules. For a polymer like PAA, the reaction with one acidic group will likely hinder the reaction with adjacent acidic groups, effectively reducing their acidity and limiting the degree of proton transfer. 
This study investigated the effects of different synthetic methods and process conditions on the degree of salt formation between the basic drug LMF and the acidic polymer PAA. The products of slurry conversion and antisolvent precipitation form a single trend where the degree of salt formation systematically increases with increasing PAA concentration, regardless of PAA’s molecular weight. The master trend represents the equilibrium for salt formation since a kinetically hindered reaction would be less complete for PAA of higher molecular weight. The master trend is well described by an equilibrium reaction model in further support of our conclusion. Remarkably, the literature methods of HME and RE reached far lower degrees of salt formation than the reaction equilibrium. This is significant since both HME and RE are standard methods for manufacturing amorphous solid dispersions. Their inability to complete the salt formation between a drug and a polymer calls for careful optimization of process conditions and characterization of the final product for quality control. We find that a high degree of salt formation has a positive effect on drug stability and release. Based on this work, we recommend slurry conversion as the method for preparing amorphous drug–polymer salts for its low cost, its ability to complete salt formation, and its ability to continuously adjust drug loading. This work has provided a vivid illustration of the extremely different physical states that an amorphous drug–polymer formulation can have because of a change in manufacturing method and process condition. The amorphous nature of a formulation might give the impression that the ingredients are uniformly mixed. 
But for the system studied here, the drug and the polymer can be almost fully reacted to form a salt or barely reacted at all, depending on the method of preparation. This translates into a significant difference in drug stability and release. The extreme variability of the physical state attained by a drug–polymer formulation stems from the low mobility of macromolecules and from the linking of the reaction sites into a chain. Relative to a small counterion, reaction with a polyelectrolyte could be significantly slower. Consistent with this view, in our slurry method, PAA of the highest M W (4000 kg/mol) required more vigorous agitation to complete salt formation, especially when the polymer concentration was high. Although this work focused on a system in which the drug and the polymer can ionize each other, the state of mixing is likely a general issue in developing amorphous solid dispersions, with a strong impact on product performance.
Neonatal resuscitation practices in Italy: a survey of the Italian Society of Neonatology (SIN) and the Union of European Neonatal and Perinatal Societies (UENPS)
34dc30df-7cab-4938-b566-89174bfcce07
9164545
Pediatrics[mh]
Approximately 5–10% of all newborns need support for transition to initiate breathing and aerate the lungs, while less than 1% require advanced resuscitation at birth, including tracheal intubation, chest compressions, and medications . Inadequate cardiorespiratory support at birth may result in pulmonary damage and worsen ongoing hypoxia or ischemia, with the risk of aggravating patient outcomes . To improve resuscitation practices worldwide, the International Liaison Committee on Resuscitation (ILCOR) promulgates and regularly updates through rigorous and continuous review of scientific literature the consensus on both the science of neonatal resuscitation and recommendations for treatment. These updates have formed a basis from which individual countries drew up their own guidelines. Of these, the guidelines drawn up by the American Academy of Pediatrics (AAP)/American Heart Association (AHA) are notable and were adopted by the Italian Society of Neonatology (SIN) and its Task Force on Neonatal Resuscitation since 1994. To improve compliance with the guidelines and to promote adequate knowledge of newborn life support, SIN has invested heavily in neonatal resuscitation training sessions for national instructors, national and on-site neonatal resuscitation courses for practitioners and meetings and congresses on the topic. Issuing and disseminating recommendations is, however, not sufficient, so we set out to evaluate the degree of uptake of these recommendations. Therefore, the aim of our study was to assess the adherence to Neonatal Resuscitation Guidelines in Italian centres, and to compare the consistency of practice between level-I and level-II centres with the hope that the results of our study would help to shape the strategies for diffusion of the current guidelines according to actual needs. This cross-sectional study was conducted as an electronic, web-based survey involving all Italian birth centres. According to the 2021 Italian standards of perinatal care , level-I centres take care of low- and medium-risk pregnancies and assist both healthy newborns and infants with intermediate diseases. Centres with 500–999 births/year can admit infants with GA ≥36 weeks and BW ≥1900 g, while centres with ≥1000 births/year can admit infants with GA ≥34 weeks and BW ≥1750 g, if the weight is adequate for GA (> 10° centile and < 90° centile). However, at the time of the survey, Italian level-I units admitted infants with GA ≥34 weeks, and some of them infants with GA ≥32 weeks. Level-II centres, in addition to the first level of care, are responsible for high-risk pregnancies and assist newborn infants with complex pathologies in neonatal intensive-care beds. This Italian survey is part of a European survey on delivery room practices endorsed by the Union of European Neonatal and Perinatal Societies (UENPS) and SIN . The study was approved by the Padua Provincial Institutional Review Board and was declared not to be human subject research. The survey was anonymous and sent to the directors of all Italian birth centres between January and September 2020 by e-mail link ( www.surveymonkey.com ). A reminder was sent to non-responders every 2 weeks for a maximum of three times; if no answer was received, we contacted the participant by phone. The participant was considered a non-responder if no response was obtained after the phone call. The survey consisted of a 91-item questionnaire focusing on current Delivery Room (DR) practices of neonatal resuscitation . 
It was broken down into the following sections: a) epidemiological data, b) perinatal organization, c) equipment, d) procedures, e) ethics, and f) education. The questionnaire was prepared by a committee of experts in neonatal resuscitation and members of the SIN Task Force on Neonatal Resuscitation. The questions included multiple-choice, fill-in, and yes/no questions (the complete questionnaire can be consulted in the Additional file). All returned questionnaires were reviewed by two researchers working separately to avoid duplication of data. Data were examined with descriptive analyses. Categorical data were expressed as numbers and percentages and continuous data as median and interquartile ranges (IQR). In the analysis, level-I and level-II centres were compared using χ2 tests. Statistical analysis was performed using Stata 15 Statistical Package (StataCorp. 2017. Stata Statistical Software: Release 15. College Station, TX: StataCorp LLC). In total, 418 neonatologists and paediatricians identified as directors of birth centres were invited to participate in the survey. The overall response rate to the questionnaire was 61.7% (258/418), 95.6% (110/115) for level-II centres and 49.0% (148/303) for level-I centres. Missing values to individual questions were always < 10%, except in one question (see text and tables). Of the participating centres, 49 (11.7%) were academic hospitals. Among these, 43 (39.1%) were level-II centres and 6 (4.0%) were level-I centres. In 2018, approximately 300,000 births occurred at the participating hospitals (about 70% of all Italian births). The median of births/centre was 1664 (IQR: 1250–2391) at level-II and 737 (IQR: 525–1035) at level-I centres. The main results of the survey are reported in Tables , , , and , broken down by questionnaire section. Participating level-II hospitals were able to provide nasal-CPAP and/or high-flow nasal cannulae (100%), mechanical ventilation (99.1%), HFOV (71.0%), inhaled nitric oxide (80.0%), therapeutic hypothermia (76.4%), and ECMO (8.2%). Nasal-CPAP and/or high-flow nasal cannulae and mechanical ventilation were available in 77.7 and 21.6% of the level-I centres, respectively. In 74.5% of the level-II centres, the lowest GA of assisted infants was 22–23 weeks, while among level-I centres the lowest GA of assisted infants was 32 weeks in 20.6% and 34 weeks in 53.9%. Multidisciplinary antenatal counselling was routinely offered to parents at 90.0% (90) of level-II hospitals, and 57.4% (85) of level-I hospitals ( p < 0.001). A neonatologist or paediatrician was required to attend all deliveries in about 60% of centres at each level. Significant differences between level-II and level-I centres were found mainly in antenatal counselling, composition of the resuscitation team for high-risk deliveries, team briefings before resuscitation, providers qualified with full resuscitation skills and role of the anaesthesiologist (Table ), lack of awareness about temperature (Table ), routine tracheal suction for non-vigorous neonates born through meconium-stained amniotic fluid, self-confidence, sodium bicarbonate use (Table ) and frequency of neonatal resuscitation courses (Table ). Overall, our survey documents good compliance with international guidelines on neonatal resuscitation in most Italian level-II and level-I centres. However, as expected, there are some differences, mostly in perinatal care.
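As an aside on the between-level comparisons reported above, which the authors ran in Stata 15, the short Python sketch below shows an equivalent χ2 test on a 2×2 contingency table. The practice compared and all counts are invented for illustration only; they are not survey data.

from scipy.stats import chi2_contingency

# Hypothetical example: number of centres reporting a given practice
# ("yes"/"no"), split by level of care. These counts are illustrative
# placeholders, NOT the survey results.
counts = [
    [90, 20],  # level-II centres: yes, no
    [85, 63],  # level-I centres: yes, no
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")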
Before birth A lower number of births/centre and less complexity in the assisted cases account for the differences in how the levels approach the period ‘before birth’. Indeed, before delivering a very preterm infant or an infant with anticipated problems, counselling with parents is routinely performed more often at level-II centres, which expect to assist the most vulnerable newborn infants. For the same reason, paediatricians and neonatologists working in level-I hospitals are less trained for emergencies, so an anaesthesiologist is more frequently needed on the resuscitation team. Umbilical cord management Delaying cord clamping (DCC) at birth is associated with increased haemoglobin levels and better iron stores in newborn infants ; it favours the cardiovascular transition that occurs in the first minutes of life and decreases mortality among infants with GA < 37 weeks . Delayed or physiologically based cord clamping (i.e. aerating the lung before removing placental support) is provided to most vaginally delivered term infants, suggesting that these interventions were introduced in Italy after umbilical cord management recommendations were disseminated . However, in term elective caesarean section, delayed strategies decrease to 67.2% in level-II and 52.7% in level-I hospitals, indicating the need to implement umbilical cord management for these otherwise healthy newborn infants. As suggested by recent evidence, delaying cord clamping does not affect maternal blood losses . Temperature A high incidence of postnatal hypothermia has been reported in high- and low-resource countries. It remains an independent predictor of neonatal morbidity and mortality, especially in very preterm infants in all settings. The International Guidelines suggest that the temperature of newly born infants should be maintained between 36.5 and 37.5 °C after birth through admission and stabilization . Effective interventions to achieve this may include environmental temperature 23–25 °C, use of radiant warmers, exothermic mattresses, woollen or plastic caps, plastic wraps, humidified and heated gases . According to our survey, neonatologists in level-I centres are less aware than those in level-II of the importance of keeping the delivery room and operating room temperatures within the suggested ranges as part of the effective interventions to prevent thermal losses at birth. It is well known that therapeutic hypothermia should be started within the first 6 h of life for newborn infants at risk of hypoxic-ischemic encephalopathy . We observed good compliance with this practice in all Italian birth centres, in most of which ‘passive cooling’ is started within 1 h of life. Airways, ventilation, circulation and medications Since 2015, the International Guidelines have recommended against routine endotracheal suctioning of meconium-stained non-vigorous newborns, instead suggesting resuscitation with positive pressure ventilation. Our survey showed that both level-II and level-I centres had changed their practice accordingly; however, in level-II centres, routine tracheal suction is still performed more often, reflecting a greater confidence with the intubation manoeuvre. Effective ventilation is considered the most critical intervention for successful delivery room resuscitation . Most perinatal management of this aspect is comparable between birth centres, especially in the use of a ‘gentle’ approach to ventilation. 
In both level-II and level-I hospitals, a T-piece is preferred to a self-inflating bag to administer positive pressure ventilation (PPV), predominantly using a face mask as the first interface. However, in 12% of level-II centres, short binasal prongs are the first choice. This solution seems to offer some advantages over face masks in terms of reducing intubation in the delivery room . An air-oxygen blender and a pulse oximeter to guide oxygen titration are available in almost all Italian delivery rooms. Moreover, as recommended by the International Guidelines, a laryngeal mask is part of the equipment in more than 90% of the participating hospitals, to be used in the event of failure of face mask ventilation or intubation . On the other hand, an end-tidal CO 2 detector, which is considered the most reliable tool to identify the correct placement of the endotracheal tube , is only available in about 20% of the responding birth centres. This device could potentially decrease the number of intubation attempts and improve outcomes . Among technical aspects, fewer level-I neonatal resuscitation teams self-evaluated as excellent or good at performing the endotracheal intubation manoeuvre. By contrast, this is a well-acquired skill by level-II teams, who are expected to assist the most vulnerable newborn infants routinely. Since 2015, electrocardiography is recognized as an important adjunct for babies requiring resuscitation. Nevertheless, a 3-lead ECG Monitor is only available in a quarter of responding centres, showing limited adherence to the latest version of the guidelines, which recommend its use in infants needing advanced resuscitation . Finally, although sodium bicarbonate is no longer considered helpful during acute resuscitation , it is still used on occasion, especially in level-I centres. Ethics and education Our survey shows that awareness of ethical issues should be reinforced in Italy. Indeed, a difficult decision like the time-limit before stopping full resuscitation in severely asphyxiated infants is not supported by shared guidelines in about 50% of responding hospitals. Moreover, parents are involved in the decision-making process of resuscitation in only one third of Italian centres. These findings are in line with previous European studies . Ethics remains a delicate field that needs further research to help neonatal staff and parents deal with difficult situations. Courses on neonatal resuscitation are routinely held in more than 80% of Italian birth centres, and most of these follow the American or European guidelines. Frequency of retraining is also optimal in about 80% of centres, according to the 2020 recommendations, suggesting that among participants who have been trained in neonatal resuscitation, individual or team booster-training should occur more frequently than every 2 years in order to support retention of knowledge, skills, and behaviour. Strengths and limitations The strengths of the present study include the structured questionnaire prepared by a group of experts; the assessment of several areas of neonatal resuscitation; the high representativity of the sample, which accounts for about 70% of all Italian deliveries in 2018; and finally, the use of an online survey. This study has some limitations, starting from the consideration that conducting surveys in an online format runs the risk of selection bias . The response rate is at the same time a strength for level-II hospitals and a limitation for those at level-I. 
The limited response rate (49.0%) of level-I centres may restrict the generalizability of our findings. Finally, as only the directors of neonatal wards were involved, the results may mirror the opinions of this very restricted group of clinicians; nevertheless, the questionnaire was structured to limit this risk.
This survey provides insight into neonatal resuscitation practices in a large sample of Italian hospitals. Overall, adherence to the International Guidelines on neonatal resuscitation was high, but we also saw some divergences in practice, ethical choices, and training among the participating centres. Clinicians and stakeholders should consider this information when allocating resources and planning Italian perinatal programmes. The areas of the current guidelines that require further implementation include aspects of perinatal care, respiratory support, ethics and training. This goal can be achieved by defining new educational strategies, including quality improvement projects, simulation-based training and interventions in communication technology. Additional file 1: Supplementary file 1. PDF Questionnaire, complete survey questionnaire. Additional file 2: Supplementary file 2. List of participating centres, complete list of participating centres.
Molecular diagnostics of dystrophinopathies in Sri Lanka towards phenotype predictions: an insight from a South Asian resource limited setting
f34afd8d-b0b2-414f-8430-b6ba143f59f3
10775540
Pathology[mh]
Duchenne muscular dystrophy (DMD), OMIM #310,200 and Becker muscular dystrophy (BMD), OMIM #300,376 are X-linked recessive disorders caused by pathogenic variations in the DMD gene (OMIM *300,377, HGNC ID: 29). These conditions are collectively referred to as dystrophinopathies . The prevalence of DMD and BMD, according to a recent meta-analysis, is 4.8 per 100,000 and 1.6 per 100,000, respectively . It is important to note that mutations in the DMD gene predominantly affect males. However, there have been reports indicating that between 2.5% and 7.8% of females have also been affected by these mutations, thereby being classified as symptomatic carriers. Receiving an accurate diagnosis of dystrophinopathy is crucial to avoid the lengthy and somber diagnostic odyssey. Even though the average age of diagnosis for DMD has remained around 5 years for over a decade, it has been reported that the average age of diagnosis in Europe has decreased below the age of 3 years, reflecting the impact of enhanced access to molecular diagnostics and increased primary physician awareness . In contrast, diagnostic delays in DMD persist with notable frequency within traditionally marginalized populations encompassing individuals hailing from developing nations and those of a lower socioeconomic stratum . It can be challenging to differentiate DMD from BMD at a younger age. In this case, the "reading frame rule" can aid differential diagnosis, where DMD patients typically show out-of-frame deletions, whereas BMD patients typically show in-frame deletions . The frame-shift hypothesis can predict the occurrence of DMD in 90% of cases and BMD in 94% of instances, with about 10% of genetic variations not adhering to the reading frame rule . Exceptions to the reading frame rule highlight the intricacy of the condition and show that factors other than the reading frame affect how the dystrophin protein is expressed. These factors include the type of variation, where it is located within the DMD gene, and its size . When evaluating the results of a molecular diagnosis to characterize dystrophinopathies, this cumulative impact of the type, size and localization of the variation is of importance. To the best of our knowledge, the present study is the first and the largest comprehensive genetic analysis of a cohort of 236 clinically suspected pediatric and adult myopathy patients in a geographically defined South Asian population, Sri Lanka, using a combined approach of Multiplex PCR (mPCR) and Multiplex Ligation Dependent Probe Amplification (MLPA). The aims of this study are: (i) to determine the frequency and distribution of DMD gene variants (deletions/duplications) in Sri Lanka through the utilization of a combined approach involving mPCR followed by MLPA, and to compare them with the international literature; and (ii) to determine the applicability of the "reading frame rule" in the Sri Lankan DMD/BMD patient population. Patient recruitment A total of 236 patients [Age range (Mean); 1.5–42 Yrs (9 Yrs); Gender (Male-233:Female-3)] exhibiting characteristic clinical findings of Muscular Dystrophy were enrolled in the study from 2014 to 2022. Clinical diagnosis was based on the diagnostic recommendations by Bushby et al. . Sociodemographic characteristics and clinical data of the patients were documented using a standard questionnaire and clinical batteries that included the North Star Ambulatory Assessment (NSAA), Vignos Scale, Brook Scale and Medical Research Council Scale (MRC).
Three females [age-9 Yrs (family history of elevated CPK; 9596 U/L, and NSAA-27/34), 10 Yrs (elevated CPK; 6786 U/L, NSAA-23/34, no family history of symptoms) and 16 Yrs (elevated CPK; 3725 U/L, wheelchair-bound at 15 Yrs of age and no family history of symptoms)] were also enrolled to assess their symptomatic carrier status. Recruitment was conducted through neurology clinics in various government hospitals across Sri Lanka's Western, North-Western, North Central, Central, Southern, and Northern Provinces, as well as through pro bono mobile clinics and home visits. These patients were referred to the Interdisciplinary Center for Innovation in Biotechnology and Neuroscience (ICIBN) of the University of Sri Jayewardenepura until 2020, and then to the Institute for Combinatorial Advanced Research and Education (KDU-CARE), General Sir John Kotelawala Defence University (KDU), Sri Lanka for genetic testing. Every participant provided written informed consent, where applicable. The assent of a proxy was obtained for patients unable to provide their own. This study adheres to the ethical standards of Sri Lankan institutional review boards that follow the Helsinki Declaration (Ethical Approval Nos. 449/09 and 38/19 from The Ethics Review Committee, Faculty of Medical Sciences, University of Sri Jayewardenepura, and Ethical Approval No. LRH/D/06/2007 Lady Ridgeway Hospital for Children, Sri Lanka). Molecular Diagnostics This study utilized the molecular diagnostic approach described in Wijekoon et al. under the same corresponding author . A summary of this approach is as follows. The initial diagnostic test for detecting deletions and duplications followed a level-one testing approach, utilizing Multiplex PCR (mPCR) for 20 exons covering proximal and distal hot-spot regions of the DMD gene as described by Chamberlain et al. and Beggs et al. , followed by the MLPA assay (MRC Holland SALSA MLPA Probe mixes P034 and P035) for all the clinically diagnosed dystrophinopathy patients. The diagnostic procedure was established utilizing the primary molecular diagnostic recommendations as outlined by Abbs et al. , as well as the revised edition by Fratter et al. , in alignment with the European Molecular Quality Genetics Network's (EMQN) optimal practice guidelines for genetic testing in dystrophinopathies . To ascertain the impact of variations on the reading frame, the frame-shift checker available on the Leiden Muscular Dystrophy website ( www.dmd.nl ) was utilized to scrutinize all identified deletions and duplications, and the numbers of patients whose variants did and did not follow the reading frame rule were determined. The comparative effectiveness of mPCR and MLPA was assessed by examining the individual capabilities of each method in detecting deletions and deletion boundaries of the DMD gene in genetically confirmed patients. Comparative analysis with existing literature data from various countries representing diverse geographical regions A literature review was conducted to compare the findings of this study, including the percentages of DMD gene deletion/duplication and the mean age of confirmatory molecular diagnosis, with existing literature data. The following method was employed for the literature review. The review process was structured into three primary stages: title screening, abstract screening, and document screening. A comprehensive search was conducted in globally recognized databases such as PubMed, Medline, Scopus, Embase, and Springer to identify relevant literature.
The search was conducted using a combination of key words: Duchenne Muscular Dystrophy, Becker Muscular Dystrophy, Mutation pattern, MLPA, and Diagnostic delay. A total of 861 publications were identified. All titles underwent a screening process, resulting in the selection of 275 documents for abstract screening. A total of 275 abstracts were reviewed, and 120 articles were identified as potentially meeting the inclusion criteria related to Duchenne Muscular Dystrophy, Becker Muscular Dystrophy, MLPA, Mutation pattern, and Diagnostic delay. Ultimately, a comprehensive evaluation was conducted on the complete text of all 120 documents that were retained. This evaluation adhered to the same set of criteria for inclusion and exclusion as the initial screening of abstracts. As a result, a total of 49 papers were deemed suitable for inclusion in the subsequent comparative analysis. Data analysis Our findings on DMD gene variation types, hotspot locations, deletion/duplication percentages, and age at molecular diagnosis were compared with available data in literature from countries representing different geographical regions that have utilized the same molecular diagnostic protocol as our study. To test whether geographical region of various patient populations has an effect on the deletion/duplication percentages, the available country-specific data from the literature were graphically analyzed using boxplots followed by an ANOVA test. Those declared significantly different by ANOVA ( p < 0.05) were then also studied using Tukey’s pairwise comparisons test. A comparative analysis was conducted to examine the mean age of confirmatory molecular diagnosis of DMD across countries representing Low and Middle-Income levels, as compared to countries representing High-Income levels. This analysis involved the use of boxplots to graphically represent the data. To ensure statistical power, the analysis categorized countries into two groups: Low and Middle-Income countries, and High-Income countries. This was necessary due to the limited availability of data on the mean age of confirmatory molecular diagnosis of DMD in only a few countries. Statistical analysis was performed using R Statistical software version 4.2.
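For readers who want to reproduce this type of comparison, the authors performed the analysis in R 4.2; the sketch below illustrates an equivalent one-way ANOVA followed by Tukey's pairwise comparisons in Python. The region labels and percentage values are invented placeholders, not the country-specific literature data analysed in this study.

import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Fabricated example data: duplication percentages reported by individual
# studies, grouped by geographical region (NOT the values used in this study).
data = pd.DataFrame({
    "region": ["South Asia"] * 3 + ["East Asia"] * 3 + ["Europe"] * 3,
    "dup_pct": [6.0, 5.5, 7.0, 12.0, 14.5, 11.0, 8.0, 9.5, 10.0],
})

# One-way ANOVA across regions
groups = [g["dup_pct"].values for _, g in data.groupby("region")]
f_stat, p_value = f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's pairwise comparisons, run only if the ANOVA is significant
if p_value < 0.05:
    tukey = pairwise_tukeyhsd(endog=data["dup_pct"], groups=data["region"], alpha=0.05)
    print(tukey.summary())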
Demographic Characteristics of patient cohort A total of 236 patients [Age range (Mean); 1.5–42 Yrs (9 Yrs); Gender (Male-233:Female-3)] exhibiting characteristic clinical findings of Muscular Dystrophy (Clinically diagnosed DMD-215, Clinically diagnosed BMD-21) were subjected to DMD gene deletion/duplication analysis by mPCR and MLPA. Table is a summary of demographic characteristics of the patient cohort. Utility of mPCR and MLPA in the molecular diagnostics of the patient cohort In the entire patient cohort ( n = 236), mPCR alone was able to identify deletions in the DMD gene in 131/236 patients (DMD-120, BMD-11). In the same cohort, MLPA confirmed deletions in 149/236 patients [DMD-138, BMD-11]. Importantly, deletion boundaries could be accurately detected by mPCR in a total of 100/236 (42%) patients. These findings suggest that mPCR has a detection rate of 95% (131/138) among all patients who received a diagnosis. Eighteen additional cases (18/236; 7.6%) (Deletions-5, Duplications-13) could be genetically diagnosed by MLPA over mPCR. The remaining 87 patients (37%) were negative for MLPA. Table provides a summary of molecular diagnostic results achieved by mPCR and MLPA. Additional file : Table S1 summarizes the additional mutations and deletion borders identified by MLPA over mPCR (see Additional file : Table S1). DMD gene deletions and duplications patterns in the Sri Lankan cohort Table provides a summary of the deletion and duplication variations, and their locations within the DMD gene. We observed clustering of deletion mutations in the exon 45–55 and 6–15 regions of the DMD gene and clustering of duplications in the exon 6–10 region in our patient population (Fig. ). Comparative analysis of DMD gene deletion and duplication locations, percentages and the mean age of confirmatory molecular diagnosis, with existing literature data We compared the variation hotspots of our cohort to the information available in the literature, as shown in Fig. .
It was clear that South Asians represented a similar distal variation hotspot spanning exons 45–56, with the exception of the Netherlands, wherein the variations ranged from exon 8–61. The exon 45–56 variation hotspot was consistent with the distal variation hotspots of the nations in South East Asia, East Asia, Europe, the USA–Canada, Latin America, the Middle East, and Africa. Furthermore, Peru, a country in Latin America, and Indonesia, a country in South East Asia, both had distinctive proximal hotspots ranging from exon 18–30 and exon 19–35, respectively. The proximal hotspots of Eastern European countries notably spanned exon 45–49. Although it was reported that duplications could occur at random anywhere in the DMD gene, the comparative analysis allowed us to determine that in the majority of populations, duplications are concentrated between exons 2–20. Iran from the Middle East, Taiwan from East Asia, and the African region stood out as unique clusters of duplications, ranging from exon 50–79, exon 42–55, and exon 45–50, respectively. When the deletion and duplication percentages of studies from various geographical regions were compared, it was evident that the duplication percentages were significantly different ( p < 0.05) between populations in South Asia vs East Asia and South East Asia vs East Asia. This is illustrated graphically in Fig. . The percentages of deletion, however, were not significantly different among populations in different geographical regions, although South Asia vs East Asia ( p = 0.06) and South Asia vs Europe ( p = 0.06) showed trends. A comparative analysis was conducted on the mean age of confirmatory molecular diagnosis of DMD across countries representing Low and Middle-Income levels, namely Sri Lanka (our study), India , Thailand , Iran , Nepal , and Africa , versus countries representing High-Income levels, including the USA , Eastern Europe, and Western Europe, based on the available literature. A significant difference ( p = 0.001) was observed in the average age of confirmatory molecular diagnosis of DMD between Low and Middle-Income countries and High-Income countries, as illustrated in Fig. . The applicability of the "reading frame rule" in the Sri Lankan DMD/BMD patient population Upon determining the impact of variations in the DMD gene on the reading frame in our population, it was observed that 117/138 (84.7%) DMD cases were attributed to out-of-frame variations, while 17/138 (12.3%) exhibited in-frame variations. The observed hotspot for in-frame variation for DMD within our population was identified as exon 45–60. Interestingly, it was observed that 15/17 (88.2%) of the cases that did not adhere to the reading frame rule were associated with global developmental delay. This is described in Table .
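As a side note on the "reading frame rule" applied above, the rule reduces to simple arithmetic for a contiguous deletion: the prediction depends on whether the summed coding lengths of the deleted exons are a multiple of three. The short Python sketch below illustrates this; the exon lengths in it are approximate placeholders rather than verified DMD reference values, and in practice the authors used the frame-shift checker on the Leiden Muscular Dystrophy website (www.dmd.nl), which also handles cases that this naive calculation would miss.

# Minimal illustration of the reading frame rule for a contiguous exon deletion.
# EXON_LENGTHS maps exon number -> coding length in base pairs; the values here
# are placeholders, not authoritative DMD exon lengths.
EXON_LENGTHS = {43: 173, 44: 148, 45: 176, 46: 148, 47: 150, 48: 186,
                49: 102, 50: 109, 51: 233, 52: 118, 53: 212, 54: 155}

def is_out_of_frame(first_deleted: int, last_deleted: int) -> bool:
    """Return True if deleting exons first..last shifts the reading frame."""
    deleted_bases = sum(EXON_LENGTHS[e] for e in range(first_deleted, last_deleted + 1))
    return deleted_bases % 3 != 0

# Example: a deletion of exons 45-52
print(is_out_of_frame(45, 52))  # True with these placeholder lengths -> predicted Duchenne phenotype under the rule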
Based on our current understanding, this is the first and the largest study to use both mPCR and MLPA to conduct a genetic analysis on a cohort of 236 clinically suspected pediatric and adult myopathy patients in Sri Lanka (Table ). Utility of mPCR and MLPA in molecular diagnostics The utilization of MLPA is presently regarded as a labor-efficient primary method for detecting deletions and duplications of single or multiple exons in the DMD gene, as approximately 70% of dystrophinopathy patients exhibit such genetic alterations . Despite not being the primary molecular diagnostic method in developed nations, mPCR remains a viable and economical option for detecting deletions, and is, therefore, utilized in many laboratories situated in countries with limited resources . In our study, as represented in Table and Supplementary Table 01, mPCR could provide a molecular diagnosis for 55% (131/236) of the patients, in 76% (100/131) of whom the exact deletion boundaries were accurately detected by mPCR. In line with our findings, studies conducted in South India and North India identified mutation detection percentages by mPCR of 68% (103/150) and 74% (161/217), respectively. Intriguingly, in our study 95% (131/138) of the patients with deletion mutations could be diagnosed by mPCR. In line with our findings, Nouri et al. identified a deletion detection rate of 95% for mPCR in an Iranian population. On average, the cost of MLPA is estimated to be five times higher than that of mPCR. The utilization of MLPA as the principal screening technique within the framework of a developing nation would involve considerable costs. Hence, the proposed approach of employing mPCR as the primary step, followed by MLPA, is a prudent and precise method to efficiently proceed with the genetic diagnosis of DMD in settings with limited resources, as previously described by Murugan et al. for South India . Using MLPA diagnostics (Table ), we were able to identify 82/138 (60%) patients as amenable to available exon skipping therapies. Interestingly, the majority of our patients were eligible for exon 51 skipping (30/82), followed by exon 53 skipping (19/82) and exon 45 skipping (19/82), which is consistent with data reported in the South Asian region, including North India , Tamil Nadu , and Pakistan. The authors extensively discussed the identification of DMD patients who can benefit from exon skipping therapies in a previous study by Wijekoon et al. in 2023. Therefore, this paper does not extensively discuss this topic . Table presents our findings of newly identified DMD variants in our study population, which have not been previously documented in the Leiden Muscular Dystrophy Gene Variant Database. According to reports, the introduction of genetic material from distinct geographical populations has the potential to augment genetic diversity and potentially engender novel genotypic configurations within populations that are not isolated or indigenous . The island of Sri Lanka, situated at the southernmost tip of South Asia and along the proposed Southern migration route, has been inhabited by diverse ethnic groups such as the Portuguese, Dutch, British, and Arabs. This presents an interesting opportunity to gain a distinct perspective on the initial settlement of the subcontinent .
In this context, we can hypothesize that the genetic admixtures that have taken place in Sri Lanka may have contributed to the emergence of the novel DMD variants that have been detected in our population. Considering there have been reports of phenotypic differences in individuals carrying the same variation , healthcare practitioners should be more cautious when interpreting genomic results in the clinical environment for patients with similar variations and familial cases. Since distinct dystrophin-expressing tissues and cells may behave differently to specific defective dystrophins, such diversity is more common in BMD patients . Furthermore, it is important to highlight that age can be confounded with phenotypic diversity when comparing patients with the same variation or familial cases. Therefore, it is advisable to take into account clinical batteries that are adjusted for age, such as the Wechsler Intelligence Scale (WISC-IV). In this regard, a previous study has conducted a comprehensive analysis discussing the relationship between the serum proteomics profile, cognitive assessment using WISC scores, and DMD gene mutations. In this scenario, we have siblings in our patient cohort who have the same variation (out-of-frame deletion of exons 1–42), and whose Full-Scale IQ (FSIQ) as measured by the WISC-IV is 85 and 94, respectively. Additionally, one sibling with the identical variation was reported to have a motor development delay and scored FSIQ-85, but the other sibling did not (Table ). Furthermore, the WISC-IV scores of another two distinct patients with an identical variation (out-of-frame deletion of exons 45–52) were 83 and 67 for FSIQ, respectively. It is interesting that, despite the fact that this variation (deletion of exons 45–52) is likely to impact how the brain dystrophin isoform Dp140 is expressed, only one of the two patients scored in the range indicating an intellectual deficit on the WISC. These cases serve as evidence of how complex it is to interpret genetic data in a clinical setting. BMD patients with DMD variants in exons 1–8 and exons 41–45 impacting the Actin Binding Domain (ABD) and R16/R17 nNOS-binding domains are said to have a more severe presentation of BMD . We were unable to thoroughly evaluate this claimed connection due to the accumulation of DMD variants exclusively from exons 45–49 in our BMD cohort. Additionally, there have been reports on the comorbidity of Moyamoya disease , Frontometaphyseal Dysplasia , and Rippling muscle disease with DMD. These findings highlight the intricacy of assessing the phenotypes of patients with dystrophinopathy in a clinical setting and underscore the need for the implementation of whole exome sequencing (WES) in the evaluation of dystrophinopathy patients exhibiting complex phenotypes and negative results on MLPA testing. Deletion, duplication percentages and their location in the DMD gene Accordingly, as summarized in Table , 58% (136/236) of the cases in our sample were due to deletions as analyzed by MLPA [DMD- 90% (125/138) and BMD- 100% (11/11)], and 6% (13/236) of the cases (all DMD) were due to duplications. The DMD gene has a higher degree of allelic heterogeneity compared to many genes, due to the spontaneous mutation rate and large size, with 79 exons spanning 2.2 Mb, and hot spots for deletion mutations. One or more exons are deleted in 60–65% of DMD patients and 85% of BMD patients, respectively . Data from the literature demonstrate that deletion and duplication percentages vary across various populations.
When deletion percentages for various ethnicities are taken into account (Fig. ), East Asian populations, including Japan (61%) , Taiwan (36%) , China (58% and 71%) and Korea (46% and 72%) , demonstrate a trend of lower deletion percentages in the DMD gene compared to South Asians (p = 0.06), including Sri Lanka (90%), pan India (73%–91%) , and Pakistan (87%) , and European countries, including the Netherlands (63%) , Italy (65%) , Spain (71%) , Hungary (67%) , Poland (61%) , Russia (49%) and France (67%) . However, Algeria, a Northern African country, has a deletion percentage of 77% . In contrast, a study by Selvaciti et al. on an overall group of 258 patients from Eastern European countries (Bosnia, Bulgaria, Croatia, Hungary, Lithuania, Poland, Rumania, Serbia, Ukraine and Cyprus) identified a lower deletion percentage (27%) (Fig. ). Elhavary et al. proposed that during Ancient Islamic times, Muslim immigration from the Levant and Africa, coupled with intermarriage, contributed to the reinforcement of gene flow of the DMD gene among the Saudi population. Turkey, a Middle Eastern country at the crossroads between Europe and Asia, has been found to exhibit a complex genetic makeup resulting from admixture with populations from the Balkans, Caucasus, Middle East, and Europe. Notably, genetic analyses have revealed a closer genetic affinity of the Turkish population to Europeans . In this context, two studies conducted by Cavdarli et al. and Ulgenalp et al. reported deletion percentages of 92.4% and 63.7% in the Turkish population, respectively . These percentages were observed to be higher than the deletion percentage reported in the Saudi population, which was 46.3%. According to Elhavary et al. 2019, the higher percentage of deletions observed in Turkey compared to Saudi Arabia may be attributed to the admixture of Turkish populations with those of European descent . However, a study conducted by Toksoy et al. 2019 on Turkish patients with DMD reported a deletion percentage of 48.8%, which is similar to the percentage observed in the Saudi community . Therefore, it is crucial to conduct further investigations on the hypothesis that European admixture results in higher deletion percentages. This can be achieved by studying larger patient cohorts; in particular, populations from South Asia (the Indian Subcontinent) that were long-term subjects of Portuguese, Dutch, and British colonialism may provide unique resources for such investigations. However, it is important to note that the existing literature presents differing conclusions regarding European admixture with the populations of the Indian Subcontinent. According to the findings of Reich et al. in 2009, the Ancestral North Indians (ANI) exhibit genetic similarities with individuals from the Middle East, Central Asia, and Europe . Conversely, the "Ancestral South Indians" (ASI) were found to be genetically distinct from ANI. A study conducted by Neus Font-Porterias et al. in 2019 determined that the potential ancestral group of the proto-Roma, which is the largest transnational ethnic minority in Europe, can be traced back to a Punjabi group with minimal levels of West Eurasian ancestry . Furthermore, the same study revealed the presence of a multifaceted West Eurasian element, comprising approximately 65% of the Roma population. This finding can be attributed to the intermingling that transpired between non-proto-Roma groups and the Roma community during the period spanning from 1270 to 1580.
Intriguingly, a recent study conducted by Perera et al. 2021 examined the four major ethnic groups in Sri Lanka, namely Sinhalese, Sri Lankan Tamils, Indian Tamils, and Moors. The study found that all Sri Lankan ethnicities, with the exception of Indian Tamils, exhibited a close clustering with populations from the Indian Bhil tribe, Bangladesh, and Europe. This clustering pattern suggests a shared Indo-Aryan ancestry among these populations . Although consanguineous marriages are infrequent in Western societies, Middle Eastern populations with high consanguinity rates, such as Iran (50.7% in urban and 86.2% in rural areas) and Saudi Arabia (80.6% in Samtah and 62.8% in Riyadh), show high DMD gene deletion percentages, reported as 80% in Iran and 78% in Riyadh, respectively. Elhavary et al. suggest that the observed higher consanguinity rate in Riyadh may have a link with the increased DMD deletion rates (77.8%) observed in Riyadh . Moreover, Algeria, a country in Northern Africa, has reported a higher rate of consanguinity (36.6%) . Selvaciti et al. reported a noteworthy finding that Algerian patients exhibit a higher percentage (77%) of DMD deletions compared to Eastern Europeans , whose mutations are primarily nonsense (31%) followed by deletions (29%). Notably, consanguinity accounts for 20–50% of marriages in various parts of Africa and Asia, particularly in South Asia . In Pakistan and the southern portions of India, consanguineous marriages account for around 70% and 23% of all marriages, respectively . In this context, it is possible to infer that consanguinity may have played a role in the higher rates of DMD gene deletions observed in South Asians, Africans, and Middle Eastern countries. However, it is important to note that while consanguineous unions can result in a higher occurrence of autosomal recessive disorders, there is ample evidence to suggest that consanguinity does not elevate the risk for autosomal dominant conditions or X-linked recessive conditions . The available scientific evidence does not provide a strong basis for linking the higher rates of DMD gene deletions observed in South Asians, Africans, and Middle Eastern countries to consanguinity. In this context, the increased frequencies of deletions in the DMD gene may be attributed to various mechanisms that are involved in the formation of genomic rearrangements . Non-allelic homologous recombination (NAHR) is an important mechanism that can explain the frequencies of deletions and duplications . The NAHR mechanism, specifically the crosslinking of Alu repeats, has been implicated as a causal factor in deletions affecting various genes, including the DMD gene . However, it is important to note that if NAHR caused both deletions and duplications, one would anticipate comparable frequencies of deletions and duplications for each intron; this is not observed in the case of DMD . It has therefore been reported that nonrecurrent events typically do not arise through NAHR. Instead, nonhomologous end joining (NHEJ), which involves the ligation of double-strand breaks, is often suggested as a mechanism for nonrecurrent intragenic deletions and duplications . Several studies have provided supporting evidence for this in DMD through the sequencing of deletion breakpoint junctions in the DMD gene. Moreover, it has also been proposed that duplications may occur at various stages of the cellular cycle.
Similar to point mutations, deletions are primarily inherited from the maternal lineage, whereas duplications are passed down through the paternal germ line . When the variation hotspots of our cohort were compared with the information available in the literature (Fig. ), a distinct distal hotspot was identified for the Netherlands, ranging from exon 8 to exon 61. This observed variation may have been influenced by clustering, which is connected to locally constrained gene flow across significant Dutch rivers and to country-wide ancestry gradients from neighboring territories . For duplications, Iran in the Middle East (exons 50–79) and the African region (exons 45–50) stood out as unique hotspots. The observed uniqueness in duplication hotspots in Iran and Africa may be due to the high consanguinity rates associated with these populations. The higher deletion frequency in the distal hotspot region (Fig. ) and the low duplication frequency observed in South Asians may provide insight into the feasibility of implementing conventional molecular diagnostic approaches such as mPCR, which can easily detect about 90% of the deletions in the hotspot region. Thus, it is proposed to develop tailored molecular diagnostic algorithms that are regional and population-specific and easily implemented in low-resource settings.
Delay in onset of the symptoms to molecular diagnosis
It is notable that diagnostic delays persist in traditionally disadvantaged groups, such as patients from developing countries and those with lower socioeconomic status, because access to subspecialty care and genetic testing is difficult for patients from developing countries , including Sri Lanka (Fig. ). It is noteworthy that the average age of patients receiving their first clinical evaluation in our cohort was four years (Table ), the same as the age of onset of symptoms. This is in contrast to data reported in India (age at symptom onset 3.7 ± 1.9 years, age at first clinical evaluation 8.1 ± 2.5 years), China (age at symptom onset 3 years, age at first clinical evaluation 6–8 years) and Saudi Arabia (age at symptom onset 1–3 years, age at first clinical evaluation 9–12 years). Although the average age of symptom onset in our patient cohort was 4 years (Table ), only 21% of patients (29/138) were referred for molecular diagnostics before the age of 5 years. In our cohort, this has increased the average age at referral to molecular diagnostics to 7.8 years, indicating a delay in receiving an accurate diagnosis. The observed delay may be attributable to the following: (1) the time required for referral to a specialist (neurologist/pediatric neurologist) and the difficulty in obtaining access to crucial diagnostic tests that must be performed at a government tertiary care hospital; (2) lack of awareness among clinicians regarding the significance of molecular diagnostics as the gold standard for DMD confirmation; and (3) neurogenetic testing is almost nonexistent in Sri Lankan government hospitals and only available at exorbitant cost in a few private sector centers. To the best of our knowledge, the neuromolecular diagnostic service established by the corresponding author at a government institute is the first of its kind to offer free molecular diagnostics for certain neuromuscular and neurodegenerative diseases.
Moreover, CPK screening remains the initial approach to testing for muscular dystrophies in resource-limited settings where molecular diagnosis is not readily available at a reduced cost. In our study, the majority of patients in our cohort underwent CPK evaluation during their initial visit to the healthcare professional (Table ). This evaluation prompted their enrollment in the molecular diagnostic program, which provides genetic confirmation at no cost. The authors have previously addressed the relationship between age, mutation pattern and CPK levels in this patient cohort in a comprehensive manner, as documented in Wijekoon et al., 2023 . Therefore, further discussion of this topic is not included in the present paper. Thus, CPK screening may be suggested in primary care as an approach to the suspected early diagnosis of dystrophinopathy in resource-limited settings, to be followed by a confirmatory molecular diagnostic approach, which will further reduce the diagnostic delay. Despite being one of the most comprehensive studies conducted to date on dystrophinopathies in Sri Lanka, we acknowledge the following limitations of our study. A total of 87 patients were negative on MLPA analysis; however, due to limited infrastructure and financial constraints, we could not perform genome sequencing for the MLPA-negative cases, which we leave open for future international collaboration. In addition, carrier detection for the mothers and female siblings of the probands was only conducted in a limited number of cases at the request of the consultant neurologist/pediatric neurologist, owing to the lack of genetic counselling services within the Sri Lankan health care system.
DMD gene variations interpretation from genetic report to clinic and the reading frame rule
It is important to remember that false positive results in MLPA can occur due to failed primer/probe binding, especially in single exon deletions. In this regard, according to Kim et al. 2016, MLPA has been found to have a false positive rate of approximately 15% in cases of large gene rearrangements that affect a single exon . In addition, Buitrago and colleagues documented a 40% rate of false-positive results among individuals who were identified as having mutations affecting single exons through MLPA testing . Hence, it is recommended that medical professionals in the clinic take into account the variation detection procedure utilized before drawing any conclusions about a single exon deletion . The best practice recommendations of the European Molecular Genetics Quality Network (EMQN) for genetic testing in dystrophinopathies include reconfirming single exon deletions discovered by MLPA using PCR . In this analysis, we found n = 28 (20%) DMD patients with single exon deletions, with exon 44 and exon 51 being the most commonly deleted single exons (Table ). Following EMQN protocols, all single exon deletions were re-confirmed by multiplex PCR before being reported to the clinic. To predict potential Duchenne or Becker effects of a deletion or duplication discovered using DNA data, the "reading frame rule" has gained popularity . However, since deviations from the reading frame rule are frequent, the predictive sensitivity of the "reading frame rule" has been questioned.
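Because the reading-frame rule is referred to repeatedly in this section, a minimal sketch of how it is usually applied may help: a contiguous exon deletion is predicted to behave as BMD-like when the summed coding length of the deleted exons is divisible by 3, and DMD-like otherwise. The exon lengths used below are illustrative placeholders; in real use they would be taken from a reference table such as the Leiden DMD exonic sequence listing.

```python
# Minimal sketch of the reading-frame rule. EXON_LENGTHS holds per-exon coding lengths
# in base pairs; the values below are placeholders for illustration and should be taken
# from a reference exon table in practice.
EXON_LENGTHS = {44: 148, 45: 176, 46: 148, 47: 150, 48: 186, 49: 102, 50: 109, 51: 233}

def predict_by_frame_rule(first_exon: int, last_exon: int) -> str:
    """Classify a contiguous exon deletion as in-frame or out-of-frame."""
    deleted_bp = sum(EXON_LENGTHS[e] for e in range(first_exon, last_exon + 1))
    if deleted_bp % 3 == 0:
        return "in-frame deletion -> BMD-like prediction"
    return "out-of-frame deletion -> DMD-like prediction"

print(predict_by_frame_rule(45, 47))  # 176 + 148 + 150 = 474 bp, divisible by 3
print(predict_by_frame_rule(45, 50))  # 871 bp, not divisible by 3
```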
It can be difficult for medical practitioners to judge whether to classify a patient as having a Duchenne or mild to moderate Becker phenotype by merely interpreting the reading frame, since "leaky" variations that are initially out-of-frame are found to produce low quantities of dystrophin, which will reduce the disease severity by 3–4% . As presented in Table , in our population a total of 17 out of 149 mutations (11.4%) were found to be non-compliant with the reading frame rule. Comparative analysis revealed that this value is higher than the percentages reported in various regions, including Tamil Nadu, India (3.9%) , Bangalore, India (8.4%), Saudi Arabia (5.6%) , France (4%) , Italy (5.4%) , Brazil (9.6%) , the TREAT-NMD DMD Global database (7%) , the UMD-DMD database (4%) , and the Leiden database (9%) . However, our value is lower than the values reported in China (13.6%) and Spain (15%) . Mateu et al. report that deletions exhibit a relatively low number of exceptions to the reading frame rule, whereas duplications and point mutations tend to have a greater probability of exceptions to the reading frame. In contrast, our cohort exhibited a reading frame exception rate of 8.7% for deletions and 2.6% for duplications. Nonetheless, the analysis of point mutations was not feasible in the present investigation, a constraint that we duly recognize. When in-frame variations are evaluated further, it is reported in the literature that in-frame variations encoded by exons 64–70, 2–10, and 32–35 are associated with a DMD phenotype, as in-frame variations bordering the aforementioned regions will not produce a functional dystrophin protein . In contrast, 94% (16/17) of the in-frame DMD patients in our dataset had a variational hotspot between exons 45 and 60 (Table ). Consequently, it is suggested that the in-frame variational hotspot (exons 45–60) found in our study may represent a novel population-specific in-frame hotspot that needs to be further studied in regional patient pools and validated via dystrophin protein levels. In keeping with a previous study by Yan-Li Ma et al. 2022 that revealed a predictive sensitivity of 86.8% for DMD, our cohort's predictive sensitivity for DMD based on the frame-shift theory was 85% (117/138) (Table ). It is interesting to note that early gross motor development milestone delay is documented in the literature as a clinical feature of DMD but not BMD . However, gross motor development milestone delay, when taken alone, has a limited ability to predict DMD, particularly in cases with in-frame variations . Yan-Li Ma et al. 2022 reported that the reading-frame rule combined with the walking-alone milestone significantly improved the early diagnosis rate of DMD, particularly in cases with in-frame variations, with the diagnostic coincidence rate increasing to 93.49%, significantly higher than that predicted by the reading-frame rule alone ( P = 0.05). In this context, 15/17 (88%) of in-frame DMD cases in our study showed global developmental delay, with language delay in 11/15 (73%) and motor development delay in 13/15 (87%) of these cases (Table ). It is also noteworthy that none of the BMD patients in our dataset exhibited a generalized developmental delay (Table ).
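The following lines simply restate, as arithmetic on the counts quoted above, why combining the frame rule with milestone history is attractive; no new data are introduced.

```python
# Plain arithmetic on the counts quoted above; no new data.
frame_rule_sensitivity = 117 / 138     # DMD cases carrying out-of-frame variants
in_frame_dmd_with_delay = 15 / 17      # in-frame DMD cases showing global developmental delay
print(f"frame rule alone        : {frame_rule_sensitivity:.1%} of DMD cases predicted")
print(f"in-frame DMD with delay : {in_frame_dmd_with_delay:.0%} would still be flagged clinically")
```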
Our findings thus provide more evidence in favor of the idea that the reading-frame rule should be combined with both language delay and motor development delay to increase the prediction sensitivity of DMD, particularly in situations where in-frame variations are present. The utilization of MLPA is presently regarded as a labor-efficient primary method for detecting deletions and duplications of single or multiple exons in the DMD gene, as approximately 70% of dystrophinopathy patients exhibit such genetic alterations . Despite not being the primary molecular diagnostic method in developed nations, mPCR remains a viable and economical option for detecting deletions, and is, therefore, utilized in many laboratories situated in countries with limited resources . In our study, as represented in Table and Supplementary Table 01, mPCR could provide a molecular diagnosis for 55% (131/236) of the patients, and in 76% (100/136) of these patients the exact deletion boundaries were accurately detected by mPCR. In line with our findings, studies conducted in South India and North India identified mutation detection percentages by mPCR of 68% (103/150) and 74% (161/217), respectively. Intriguingly, in our study 95% (131/138) of the patients with deletion mutations could be diagnosed by mPCR. In line with our findings, Nouri et al. identified a deletion detection rate of 95% for mPCR in an Iranian population. On average, the cost of MLPA is estimated to be five times higher than that of mPCR. The utilization of MLPA as the principal screening technique within the framework of a developing nation would involve considerable costs. Hence, the proposed approach of employing mPCR as the primary step, followed by MLPA, is a prudent and precise method to efficiently proceed with the genetic diagnosis of DMD in settings with limited resources, as previously described by Murugan et al. for South India . Using MLPA diagnostics (Table ), we were able to identify 82/138 (60%) patients as amenable to available exon skipping therapies. Interestingly, the majority of our patients were eligible for exon 51 skipping (30/82), followed by exon 53 skipping (19/82) and exon 45 skipping (19/82), which is consistent with data reported in the South Asian region, including North India , Tamil Nadu , and Pakistan. The identification of DMD patients who can benefit from exon skipping therapies was extensively discussed in a previous study by Wijekoon et al. in 2023 and is therefore not discussed in detail here . Table presents our findings of newly identified DMD variants in our study population, which have not been previously documented in the Leiden Muscular Dystrophy Gene Variant Database. According to reports, the introduction of genetic material from distinct geographical populations has the potential to augment genetic diversity and potentially engender novel genotypic configurations within populations that are not isolated or indigenous . The island of Sri Lanka, situated at the southernmost tip of South Asia and along the proposed Southern migration route, has been inhabited by diverse ethnic groups, including the Portuguese, Dutch, British, and Arabs. This presents an interesting opportunity to gain a distinct perspective on the initial settlement of the subcontinent . In this context, we can hypothesize that the genetic admixtures that have taken place in Sri Lanka may have contributed to the emergence of the novel DMD variants detected in our population.
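A minimal sketch of the tiered work-up argued for above (mPCR first, with MLPA reflexed for mPCR-negative cases or unresolved deletion borders) is given below; the function and field names are illustrative and are not part of any published pipeline.

```python
# Hypothetical triage logic for the mPCR-first strategy discussed above; names are
# illustrative, not from any published diagnostic algorithm.
def triage_dystrophinopathy_sample(mpcr_deletion_detected: bool, borders_resolved: bool) -> str:
    if mpcr_deletion_detected and borders_resolved:
        return "report deletion from mPCR; MLPA not required"
    if mpcr_deletion_detected:
        return "reflex to MLPA to resolve deletion borders and check for duplications"
    return "reflex to MLPA; if MLPA is also negative, consider sequencing for small variants"

print(triage_dystrophinopathy_sample(True, True))
print(triage_dystrophinopathy_sample(False, False))
```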
Considering that there have been reports of phenotypic differences in individuals carrying the same variation , healthcare practitioners should be more cautious when interpreting genomic results in the clinical environment for patients with similar variations and for familial cases. Since distinct dystrophin-expressing tissues and cells may respond differently to specific defective dystrophins, such diversity is more common in BMD patients . Furthermore, it is important to highlight that age can be confounded with phenotypic diversity when comparing patients with the same variation or familial cases. Therefore, it is advisable to use clinical batteries that are adjusted for age, such as the Wechsler Intelligence Scale for Children (WISC-IV). In this regard, a comprehensive analysis has been conducted discussing the relationship between serum proteomic profiles, cognitive assessment using WISC scores, and DMD gene mutations . In this scenario, our patient cohort includes siblings who carry the same variation (out-of-frame deletion of exons 1–42) and whose Full-Scale IQ (FSIQ) scores as measured by the WISC-IV are 85 and 94, respectively. Additionally, one sibling with the identical variation was reported to have a motor development delay and scored FSIQ 85, whereas the other sibling did not (Table ). Furthermore, the WISC-IV FSIQ scores of another two distinct patients with an identical variation (out-of-frame deletion of exons 45–52) were 83 and 67, respectively. It is interesting that, although this variation (deletion of exons 45–52) is likely to affect the expression of the brain dystrophin isoform Dp140, only one of the two patients showed an intellectual deficit on the WISC. These cases illustrate how complex the interpretation of genetic data in a clinical setting can be. BMD patients with DMD variants in exons 1–8 and exons 41–45, impacting the Actin Binding Domain (ABD) and the R16/R17 nNOS-binding domains, are said to have a more severe presentation of BMD . We were unable to evaluate this reported connection thoroughly because the DMD variants in our BMD cohort were confined to exons 45–49. Additionally, there have been reports of the comorbidity of Moyamoya disease , Frontometaphyseal Dysplasia , and Rippling muscle disease with DMD. These findings highlight the intricacy of assessing the phenotypes of patients with dystrophinopathy in a clinical setting and underscore the need for the implementation of whole exome sequencing (WES) in the evaluation of dystrophinopathy patients exhibiting complex phenotypes and negative results on MLPA testing. Accordingly, as summarized in Table , 58% (136/236) of the cases in our sample were due to deletions as analyzed by MLPA (DMD: 90% (125/138); BMD: 100% (11/11)), and 6% (13/236) of the cases (all DMD) were due to duplications. The DMD gene has a higher degree of allelic heterogeneity than many genes, owing to its spontaneous mutation rate and large size, with 79 exons spanning 2.2 Mb, and hot spots for deletion mutations. One or more exons are deleted in 60–65% of DMD patients and 85% of BMD patients, respectively . Data from the literature demonstrate that deletion and duplication percentages vary across various populations.
The largest and most well-established DMD mutation database in Sri Lanka demonstrates DMD gene deletions and duplications that are primarily concentrated in exons 45–55 and 2–20, respectively, which are consistent with the globally observed variation hotspots.
Importantly, a unique, distinct mutation pattern of exons 45–60 was identified as a novel in-frame variation hotspot, which would contribute to the rational design of mutation-specific therapies in personalized medicine. Furthermore, we have observed intriguing disparities in deletion and duplication frequencies when comparing our data to other Asian and Western populations. The utilization of mPCR as an initial molecular diagnostic method is considered highly feasible for countries with limited resources, owing to its 95% detection rate for deletions as identified in our study. Thereby, the authors propose mPCR as the initial screening method, followed by MLPA for cases that test negative by mPCR or have ambiguous mutation borders. Our findings may have important implications for the early identification of DMD with limited resources in Sri Lanka and for the development of tailored molecular diagnostic algorithms that are regional and population-specific and easily implemented in resource-limited settings. Additional file 1: Table S1. Additional mutations and deletion borders identified by MLPA over Multiplex PCR.
Central lung adenocarcinoma in a young male mimicking pneumonia with nonrecurrent polyserous effusions of negative cytology: A case report
820922fa-5d01-474b-84d6-a218e73a9167
11296416
Pathology[mh]
Lung cancer is the most common cancer leading to death worldwide. It is generally divided into non-small-cell lung cancer (NSCLC), which accounts for about 80% of cases, and small cell lung cancer. Approximately 70% of NSCLC cases, including adenocarcinoma, are diagnosed at a late or metastatic stage. Most patients diagnosed with lung cancer are 65 or older, but 3.5% of them are 45 years old or younger. Computed tomography (CT) can reveal a variety of appearances of lung adenocarcinoma, such as a single nodule or mass, a thin-walled cystic lesion, localized or widespread parenchymal consolidation, or multifocal lesions. Due to the difficulty in differentiating lung adenocarcinoma from pneumonia when it presents as parenchymal consolidation, diagnosis of the condition is frequently delayed. Polyserous effusions are defined as fluid accumulation in 2 or more serous cavities; they are quite rare in adenocarcinoma patients and are frequently attributed to other etiologies. However, polyserous effusions at a young age must always raise suspicion of malignancy. Ten percent of cancer patients develop pericardial effusion, with lung cancer being the most common cause. In addition, pleural effusion and metastatic ascites occur in 7% to 15% and 2.7% to 16% of lung cancer patients, respectively. In our article, we report a case of a 38-year-old male who presented as a complicated pneumonia case with nonrecurrent polyserous effusions and negative pleural and pericardial cytology, which was ultimately attributed to central adenocarcinoma of the lung. A 38-year-old male heavy smoker with no pertinent medical history presented to the internal medicine clinic with colicky right-sided and epigastric abdominal pain of 3-day duration that increased with eating and was not relieved with H2 blockers but partially relieved by ibuprofen, associated with abdominal distension, sweating, nausea, and vomiting of clear fluid without blood. He also reported generalized fatigue, a feverish sensation, and anorexia with recent weight loss. His symptoms progressed, and he developed dyspnea and a cough without hemoptysis. The patient denied any history of chest pain, palpitations, cyanosis, leg swelling, dizziness, or other symptoms. He also denied chest or abdominal trauma and recent travel. He had a family history of lung cancer in a paternal uncle at the age of 35 years, in addition to late-onset lung cancer in his grandmother. His aunt had also been diagnosed with breast cancer at the age of 45 years. The physical examination was unremarkable except for diffuse tenderness and guarding of the abdomen, more prominent in the right upper and lower quadrants, accompanied by rebound tenderness. Chest X-ray showed retrocardiac consolidation of the right lower lobe of the lung, enlargement of the cardiac outline suggesting pericardial effusion, and a moderate right-sided pleural effusion (Fig. ). Chest and abdominal CT with intravenous contrast revealed ascites, a right lower lobe lesion suggesting consolidation, and associated pleural and pericardial effusions (Fig. ). Subsequently, echocardiography detected a large pericardial effusion with significant right ventricular compression and normal left ventricular size and function. Abdominocentesis and biochemistry showed a yellow, turbid fluid with increased lactate dehydrogenase and normal glucose, total protein, and albumin. Pleural tapping and biochemistry analysis revealed an exudative fluid. Pericardiocentesis showed a bloody, turbid aspirate. The cultures of the 3 fluid samples showed no evidence of bacterial or fungal growth.
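The report states only that the pleural fluid was exudative. The usual way this call is made is Light's criteria, sketched below; the numeric inputs are illustrative placeholders and are not this patient's actual laboratory values.

```python
# Light's criteria: an effusion is classified as an exudate if ANY one criterion is met.
# All numeric inputs below are illustrative placeholders, not values from this patient.
def is_exudate(pleural_protein, serum_protein, pleural_ldh, serum_ldh, serum_ldh_upper_normal):
    return (
        pleural_protein / serum_protein > 0.5
        or pleural_ldh / serum_ldh > 0.6
        or pleural_ldh > (2.0 / 3.0) * serum_ldh_upper_normal
    )

print(is_exudate(pleural_protein=4.2, serum_protein=7.0,
                 pleural_ldh=380, serum_ldh=420, serum_ldh_upper_normal=222))
```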
Peritoneal, pleural, and pericardial fluid analyses are shown in Table . Samples of pleural and peritoneal fluid and the pericardial window were sent for cytology and pathology. Considering the diagnosis of pneumonia, he was given colchicine, ibuprofen, metronidazole, and levofloxacin. A few days later, the patient's symptoms improved dramatically. The cytology of the peritoneal fluid was interpreted as showing macrophages, neutrophils, and occasional lymphocytes with reactive mesothelial cells and was negative for malignancy. The cytology of the right pleural fluid revealed few reactive mesothelial cells, many lymphocytes, few neutrophils, and the absence of cancer cells. A pericardial window performed under echocardiographic guidance showed mild chronic inflammation with no evidence of malignancy. A follow-up CT scan revealed resolution of almost all of the pleural effusion, with significant reduction of the pericardial effusion but persistence of a spiculated, enhancing soft tissue lesion measuring about 2.1 cm in the right lower lobe (Fig. ), associated with hilar and supraclavicular lymphadenopathy (Fig. ). For further evaluation, a whole-body positron emission tomography scan was performed, which demonstrated a hypermetabolic, potentially malignant right pulmonary nodule in the posterior basal segment of the lower lobe (Fig. ), prominent bilateral supraclavicular lymph nodes, the left being larger (Fig. ), and prominent right mediastinal and hilar lymph nodes (Fig. ). Magnetic resonance imaging of the brain showed no evidence of metastatic disease. An excisional biopsy of the supraclavicular lymph node confirmed metastatic moderately differentiated adenocarcinoma of lung origin (Fig. ). Immunohistochemical stains were positive for thyroid transcription factor (TTF-1) and cytokeratin-7 (CK7), which is consistent with primary pulmonary adenocarcinoma (Fig. A, B). A puncture biopsy of the right lung mass was performed under CT guidance, and histopathology confirmed it as invasive adenocarcinoma. Thus, he was diagnosed with stage IIIC lung adenocarcinoma. Molecular testing revealed that the tumor was negative for epidermal growth factor receptor mutations, anaplastic lymphoma kinase gene alterations, c-ros oncogene 1 (ROS1), and programmed cell death ligand 1. Accordingly, the patient was started on chemoradiation therapy. In our case, lung adenocarcinoma presented as pneumonia with polyserous effusions that responded to treatment and almost completely resolved without recurrence. This contrasts with the literature, which reports a high likelihood of symptomatic, ipsilateral pleural fluid recurrence within 100 days of the initial thoracentesis in patients with advanced metastatic NSCLC and a large unilateral pleural effusion. According to these facts, this presentation is unique, and pneumonia with polyserous effusions should always raise suspicion of malignancy, especially at a young age. Moreover, the association between central lung adenocarcinoma and pleural, pericardial, and ascitic effusions has been well documented in the literature. Additionally, in contrast to our patient, who presented with advanced lung cancer at the young age of 38 years, only 0.9% of lung and bronchus cancer cases are diagnosed in the 35 to 44 age group; lung and bronchial cancers are most often diagnosed in adults aged 65 to 74 years. Tissue histology provides a conclusive diagnosis of lung carcinoma.
According to studies, pleural fluid cytology performed following the initial thoracentesis has a 60% sensitivity for detecting lung cancer; repeating the procedure raises this to 75%. As a result, it is not appropriate to rule out lung cancer merely on the basis of the absence of malignant cells in pleural cytology or biopsy. In addition, pericardial fluid cytology had a sensitivity of 92.1% in diagnosing cancer, whereas pericardial biopsy had a sensitivity of 55.3%. In this case, both the pleural and pericardial effusion cytology and the pericardial biopsy were negative for malignancy. However, malignancy was not ruled out, because pleural biopsy results in patients with lung cancer can be negative, and pleural cytology is negative in two-thirds of lung tumors. Hence, additional investigations should be performed, including thoracotomy, medical thoracoscopy, video-assisted thoracic surgery, and image-guided cutting needle biopsy. Video-assisted thoracic surgery was excluded because of advanced disease, and since thoracotomy is very invasive, CT-guided biopsy and bronchoscopy were the only remaining options. A particular challenge was that the lung lesion was located centrally, so it was not easily accessible by CT-guided biopsy or by bronchoscopy. However, the difficult decision to proceed with a CT-guided biopsy was made, since it was the only available option, with a thoracic surgeon on standby in case of bleeding or other complications. Even though this was very challenging, the CT-guided biopsy was performed without any complications. There is some debate about the stage of this cancer; the oncologists view it as stage IIIC because there is no proof of the malignant origin of the polyserous effusions. On the other hand, the thoracic surgeon's view is stage IV, because there is no other explanation for these polyserous effusions at such a young age, which were not attributed to an inflammatory or infectious origin. When ascites, pleural, and pericardial effusions are present in lung cancer patients, the disease is usually advanced and the median survival time is short. Consequently, the patient was treated as a case of stage IIIC and started on chemoradiation. In our study, we acknowledge several limitations that may have affected the scope and findings of our research. Firstly, the retrospective nature of the case report limits the ability to draw definitive conclusions about the association between polyserous effusions and lung adenocarcinoma. Additionally, the small sample size of our study, limited to a single case, restricts the generalizability of our findings to a broader population. Furthermore, the diagnostic challenges encountered in this case, including the negative cytology results and the difficulty in accessing the centrally located lung lesion for biopsy, highlight the complexities and limitations inherent in diagnosing and staging lung adenocarcinoma with polyserous effusions. These limitations emphasize the need for further research and larger studies to better understand the clinical implications and management of lung adenocarcinoma presenting with polyserous effusions. This case described a patient in his fourth decade of life presenting with complicated pneumonia and nonrecurrent polyserous effusions with negative cytology, which is surprisingly uncommon. Although pleural fluid cytology and biopsy are still used in most algorithms for detecting lung cancer, their poor sensitivity and high incidence of false-negative findings should concern clinicians.
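The sensitivities quoted above translate directly into miss rates, which is the practical point for clinicians; the snippet below is simple arithmetic on the cited figures, not a re-analysis of the underlying studies.

```python
# Simple arithmetic on the cited sensitivities: the false-negative (miss) rate of
# pleural cytology in patients who truly have malignant involvement.
single_tap_sensitivity = 0.60
repeat_tap_sensitivity = 0.75
print(f"false-negative rate, single thoracentesis : {1 - single_tap_sensitivity:.0%}")
print(f"false-negative rate, repeat thoracentesis : {1 - repeat_tap_sensitivity:.0%}")
```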
Clinicians should be aware of the unusual presentation of lung adenocarcinoma since it frequently results in misdiagnoses of infectious and inflammatory lung disorders, making a thorough workup essential. Conceptualization: Ayat A. Aljuba, Balqis Mustafa Shawer, Roa’a M. Aljuneidi, Safa Halman, Mohammed Abdulrazzak. Data curation: Ayat A. Aljuba, Balqis Mustafa Shawer, Roa’a M. Aljuneidi, Safa Halman, Afnan W.M. Jobran, Orwa Al Fallah, Nidal E.M. Al Jebrini, Izzeddin A. Bakri, Yousef Abu Asbeh. Project administration: Ayat A. Aljuba. Writing—original draft: Ayat A. Aljuba, Balqis Mustafa Shawer, Roa’a M. Aljuneidi, Safa Halman, Mohammed Abdulrazzak, Orwa Al Fallah. Writing—review & editing: Ayat A. Aljuba, Balqis Mustafa Shawer, Roa’a M. Aljuneidi, Safa Halman, Afnan W.M. Jobran, Mohammed Abdulrazzak, Orwa Al Fallah, Nidal E.M. Al Jebrini, Izzeddin A. Bakri, Yousef Abu Asbeh. Resources: Balqis Mustafa Shawer, Roa’a M. Aljuneidi, Afnan W.M. Jobran, Mohammed Abdulrazzak, Yousef Abu Asbeh. Validation: Safa Halman, Afnan W.M. Jobran, Mohammed Abdulrazzak, Orwa Al Fallah, Izzeddin A. Bakri, Yousef Abu Asbeh. Supervision: Afnan W.M. Jobran, Yousef Abu Asbeh. Investigation: Nidal E.M. Al Jebrini, Izzeddin A. Bakri.
Integrated Proteomics and Machine Learning Approach Reveals PYCR1 as a Novel Biomarker to Predict Prognosis of Sinonasal Squamous Cell Carcinoma
f698d327-a04c-42cc-9bd6-42f26119f546
11675701
Biochemistry[mh]
Nasal cavity cancers represent 5% of all head and neck cancers and show considerable histological variety and complexity, which poses a challenge for pathologists. In general, patients usually remain asymptomatic until the tumor grows to a large size; therefore, most patients are diagnosed at an advanced stage . The most common type of nasal cancer is sinonasal squamous cell carcinoma (SNSCC), which accounts for approximately 50% of all nasal cancers. SNSCC mainly originates in the nasal cavity and maxillary sinus. The incidence of SNSCC is <1 case per 100,000 population, and it is more common in men than in women. The majority of patients are aged 50–60 years, and the 5-year survival rate is 53.1% . Recently, most researchers have focused on genetic mutations and environmental factors that influence SNSCC. Environmental risk factors for SNSCC include woodworking, occupational exposure to chemical substances, and work in the leather industry . These risk factors are possibly involved in the chronic inflammation pathway . EGFR and KRAS mutations have been associated with SNSCC development . Nevertheless, little is currently known about candidate protein biomarkers for SNSCC. Therefore, exploring novel biomarkers is critical for the diagnosis and prognostic prediction of SNSCC patients. In the last decade, proteomic analysis has become a promising tool for the study of tumor biology. The purpose of clinical proteomics studies is to identify diagnostic biomarkers, understand the molecular pathogenesis of cancers, identify drug targets, and personalize medicine . Recently, machine learning (ML) has enabled biologists to uncover the underlying biology of large-scale omics datasets and has shown promise in improving diagnosis and predicting disease risk based on various factors such as clinical information, biochemical testing, electrocardiograms, medical imaging, and biomarkers . These techniques can be used to support physicians and scientists in studying and classifying anatomic pathologies. In the context of SNSCC, a rare and aggressive head and neck cancer, identifying robust biomarkers is crucial for early diagnosis, personalized treatment, prognostic prediction, and patient outcome improvement. Therefore, we aimed to identify potential tumor-associated markers through proteomic analysis integrated with ML models. Our findings suggest that a combination of ML and proteomic data can be used to classify SNSCC and nasal polyp (NP) patients. Specifically, we observed for the first time that PYCR1, a gene involved in proline metabolism, exhibits promise as a tumor-associated marker and may serve as a prognostic biomarker in SNSCC.
2.1. Label-Free Quantification of Nasal Polyps and Sinonasal Squamous Cell Carcinoma
To analyze the proteome data of SNSCC by using a mass spectrometry (MS)-based label-free quantification method, we extracted and digested the whole protein from the formalin-fixed paraffin-embedded (FFPE) tissues of 16 NP and 14 SNSCC samples collected in Srinagarind Hospital, Khon Kaen University ( A). The characteristics of the NP and SNSCC groups are shown in . There was a significant difference in age between the two groups. The digested peptides were injected into a liquid chromatography–tandem mass spectrometry (LC–MS/MS) system and quantified using DecyderMS. The total protein expression profile is shown in .
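For orientation, a minimal sketch of the kind of sample-clustering check described in the next paragraph is shown below. The study's analysis was performed in R; this Python version, with a hypothetical input file ("protein_intensity_matrix.csv") and a hypothetical "group" label column, only illustrates the idea of projecting a samples-by-proteins intensity matrix onto two principal components.

```python
# Illustrative sketch only: file and column names are hypothetical placeholders, and
# the study's own PCA was run in R rather than with this code.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("protein_intensity_matrix.csv", index_col=0)  # rows = samples, cols = proteins
labels = data.pop("group")                                       # "NP" or "SNSCC"

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(data))
for (pc1, pc2), group in zip(scores, labels):
    print(f"{group}: PC1 = {pc1:.2f}, PC2 = {pc2:.2f}")
```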
To investigate the overall differences and similarities of the protein expression profiles between the NP and SNSCC groups, principal component analysis (PCA) was performed using R version 4.3.3. The data from nasal tissues could be clustered distinctly into two groups, indicating that protein signatures can discriminate between NP and SNSCC ( B), which enabled differentially expressed proteins between SNSCC and NP to be identified for further analysis. From the results, 831 significantly differentially expressed proteins were identified in SNSCC ( C). Among them, 199 and 632 were up- and down-regulated in SNSCC, respectively.
2.2. Machine Learning for Biomarker Discovery
In this study, we used support vector machine (SVM), logistic regression (LR), random forest (RF), and gradient boosting (GB) classifiers to discover proteins that could serve as potential tumor-associated markers for SNSCC patients. To evaluate the performance of these models, the dataset was divided into groups, each containing three technical replicates. Leave-one-group-out cross-validation (LOGO-CV) was performed on the entire dataset to assess overall performance and to eliminate information leakage across the three technical replicates of each sample. The average accuracy across the LOGO-CV was over 70%, indicating consistent and reliable performance in distinguishing SNSCC from NP . Additionally, the dataset was split into a training set (80%) and a validation set (20%) to evaluate model performance on unseen data. The prediction results on the validation set are presented in and . Based on the prediction performance of the four models, the RF model showed the best SNSCC prediction performance (accuracy: 94%, precision: 92%, sensitivity: 100%, and specificity: 83%). Three models (RF, SVM, and LR) showed more than 70% accuracy, precision, sensitivity, and specificity, indicating that they are robust and reliable for classifying SNSCC and NP cases. Although the GB model showed a sensitivity of 67%, it achieved a specificity and precision of 100%, indicating its high effectiveness in minimizing false positives. However, its lower sensitivity suggested a reduced ability to identify all true positive cases compared to the other models. To select potential tumor-associated markers for SNSCC, the intersection of feature proteins from each model was used . We found that 17 proteins were common to all models, suggesting that they could serve as potential tumor-associated markers for SNSCC . Therefore, this 17-protein panel could potentially be used as a tumor marker for SNSCC.
2.3. Selection of Candidate Biomarkers
According to the common feature proteins, five criteria were used to select candidate tumor-associated markers for further validation experiments in SNSCC and NP tissues. These included a review of the literature, pan-cancer expression levels, gene expression levels in The Cancer Genome Atlas Head-Neck Squamous Cell Carcinoma (TCGA-HNSC) dataset, stage plots, and survival rates from the GEPIA2 database. Notably, the genes MYO1B and PYCR1 were significantly upregulated in various cancer types, especially the HNSC dataset, when compared to normal tissues ( A,B,D,E). Based on the expression levels of MYO1B, we divided the cancer cases into high-expression and low-expression groups and investigated the survival of patients in the HNSC dataset. High expression of MYO1B was associated with poor overall survival in HNSC ( p = 0.0097; C).
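A compact sketch of the evaluation scheme described above is given below: leave-one-group-out cross-validation in which each group is one sample with its three technical replicates, so replicates never straddle the train/test split. The data here are random placeholders shaped to match that logic; the real feature matrix, labels, and model hyperparameters are the study's own and are not reproduced here.

```python
# Sketch of LOGO-CV with the four classifier families named above; toy data only.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 200))          # 30 samples x 3 technical replicates, 200 proteins
y = np.repeat([0, 1], 45)               # 0 = NP, 1 = SNSCC (toy labels)
groups = np.repeat(np.arange(30), 3)    # replicate grouping: one group per sample

models = {"RF": RandomForestClassifier(random_state=0),
          "SVM": SVC(),
          "LR": LogisticRegression(max_iter=1000),
          "GB": GradientBoostingClassifier(random_state=0)}
for name, model in models.items():
    folds = LeaveOneGroupOut().split(X, y, groups)       # replicates stay together per fold
    acc = cross_val_score(model, X, y, cv=folds, scoring="accuracy")
    print(f"{name}: mean LOGO-CV accuracy = {acc.mean():.2f}")
```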
Stage plots show that the expression of PYCR1 increased with the continuous progression of HNSC ( p = 0.0267; F). Thus, we selected MYO1B and PYCR1 for further validation in SNSCC and NP tissues.
2.4. PYCR1 Might Serve as a Tumor-Associated Biomarker for SNSCC
To investigate the mRNA expression of PYCR1 in SNSCC and NP tissues, total RNA was extracted from 115 FFPE samples of SNSCC (N = 63) and NP (N = 53). The relative expression levels were examined using RT-qPCR. The expression of PYCR1 mRNA was significantly upregulated in SNSCC compared with NP tissues (two-fold change, p < 0.0001; A). The overexpression of PYCR1 was further supported by proteomic data, where PYCR1 was found to be significantly upregulated (unadjusted p = 0.0099; B). After applying the Benjamini–Hochberg (BH) correction for multiple comparisons, the adjusted p-value (q-value) increased to 0.062, which exceeded the commonly used significance threshold of 0.05. Despite this, the consistent upregulation of PYCR1 in SNSCC was validated through qRT-PCR, reinforcing its biological relevance as a potential biomarker for SNSCC. In contrast, the expression of MYO1B was not significantly upregulated in SNSCC tissues compared to NP tissues ( A).
2.5. Review of PYCR1 Expression in Different Tumor Tissues and Its Association with Clinicopathological Characteristics in SNSCC Patients
Based on pan-cancer analysis, PYCR1 was consistently expressed across various cancer types. A further review of the literature examined the role of PYCR1 in different types of cancer. Higher PYCR1 expression was associated with various clinicopathological features, such as metastasis and advanced tumor stage . The biological consequences indicated that PYCR1 affected key hallmarks of cancer, including cell proliferation, anti-apoptosis, and metastasis. Moreover, higher PYCR1 expression was associated with a worse prognosis. These findings suggested that PYCR1 may serve as a potential oncogene, prognostic biomarker, and therapeutic target for various cancers. As previously indicated, overexpression of PYCR1 was associated with various clinicopathological features and prognosis. Therefore, we examined whether PYCR1 expression was associated with clinicopathological features in SNSCC patients. We divided SNSCC patients into a high PYCR1 expression group (n = 27) and a low PYCR1 expression group (n = 27) according to the median relative expression in SNSCC. However, PYCR1 expression was not found to be associated with age, gender, cell differentiation, sub-type, or invasion . To evaluate the prognostic significance of PYCR1 in SNSCC, the R software version 4.3.3 package maxstat was used to determine the optimal cutoff. A Kaplan–Meier analysis with a log-rank test was conducted based on this optimal cutoff, which corresponded to a 4.45-fold change in PYCR1 expression. The results revealed that high PYCR1 expression was significantly associated with poor overall survival compared to low expression (low expression vs. high expression = 27.30 months vs. 10.74 months, HR = 2.40, p = 0.0137; ).
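The cutoff-based survival analysis described above was performed with R's maxstat package; the sketch below mimics the idea in Python (scan candidate cutoffs, keep the one maximizing the log-rank statistic, then fit a Kaplan-Meier curve), but it omits maxstat's correction for testing many cutoffs and runs on simulated placeholder data rather than the cohort's PYCR1 values.

```python
# Simplified analogue of maxstat-style cutoff selection; placeholder data only.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
expr = rng.lognormal(mean=1.0, sigma=0.6, size=54)   # toy PYCR1 fold-change values
time = rng.exponential(scale=20, size=54)            # toy follow-up times in months
event = rng.integers(0, 2, size=54).astype(bool)     # death observed yes/no

def best_cutoff(expr, time, event):
    # scan cutoffs between the 20th and 80th percentiles, keep the highest log-rank statistic
    candidates = np.quantile(expr, np.linspace(0.2, 0.8, 25))
    scores = [(logrank_test(time[expr > c], time[expr <= c],
                            event[expr > c], event[expr <= c]).test_statistic, c)
              for c in candidates]
    return max(scores)[1]

cut = best_cutoff(expr, time, event)
km = KaplanMeierFitter().fit(time[expr > cut], event[expr > cut], label="high PYCR1")
print(f"selected cutoff = {cut:.2f}, median survival (high group) = {km.median_survival_time_:.1f}")
```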
In this study, we integrated proteomic profiling with ML algorithms to achieve a robust classification of SNSCC and NP, two diagnostically challenging lesions. The PCA and volcano plot revealed distinct protein expression patterns between NP and SNSCC. Recently, a comprehensive characterization study identified signature markers of sinonasal cancer based on epigenetic data . Protein alterations, methylation, and genetic mutations could be identified as potential clinical biomarkers. ML has also recently been applied in proteomic analysis to identify important factors in cancer .
A proteomics-based ML algorithm can serve as a robust tool for classifying prostate cancer from serum and urine samples, with more than 80% sensitivity and specificity . The dysregulation of proteins allowed us to identify potential classifiers for differentiating between SNSCC and NP using an ML model. In the present study, we demonstrated that the proteomics-based ML classification algorithm can differentiate between SNSCC and NP with more than 70% accuracy. We also found that RF had the highest performance of the ML models tested. Thus, proteomics-based ML classification algorithms can help clinicians and scientists identify SNSCC. However, the ML classification algorithms were not hyperparameter-tuned, which may contribute to suboptimal performance on new datasets . While our proteomics-based ML model shows promising results, it has limitations that should be noted. Firstly, the significant age difference between the NP and SNSCC groups may influence the study results, as age is a known risk factor for cancer due to accumulated genetic mutations and environmental exposures. This difference could affect biomarker identification, because the proteomic profiles may reflect age-related changes rather than disease-specific differences. To address this limitation, future studies should validate the biomarkers in age-matched cohorts and consider statistical adjustment for age to separate its effects from disease-specific proteomic changes. Secondly, the small size of the test set (n = 18) increases variability and uncertainty in the performance metrics, as evidenced by the wide 95% confidence intervals (95% CIs). Larger datasets and external validation are needed for more reliable, generalizable results and clinical applicability. Thirdly, FFPE-based proteomics methodologies are promising for biomarker discovery, but they are challenging because of limited sample quantities, formalin-induced cross-links, and the difficulty of identifying low-abundance proteins . Our dataset comprised 30 individual nasal tissue samples across three dependent experiments, which helped to improve model robustness and to account for a broader range of data variation, addressing some of the challenges associated with FFPE-based proteomics. To date, there is a lack of comprehensive data on SNSCC patients because of the rarity of this tumor. Previous studies have mainly focused on mutations as biomarkers for SNSCC . Since SNSCC is a subset of head and neck cancer, we used the comprehensive TCGA database, mainly the HNSC dataset, to identify potential tumor-associated markers. Our analysis highlighted the dysregulation of both MYO1B and PYCR1 in different cancer types, especially in the HNSC dataset. Specifically, elevated MYO1B expression has been reported in various cancer types, such as colorectal cancer and cervical cancer, where it contributes to cell migration, invasion, and metastasis . These biological effects are associated with regulation of the actin cytoskeleton and glycolysis. However, the function of MYO1B in SNSCC is currently unclear. Our RT-qPCR analysis showed that MYO1B mRNA levels were not significantly different between SNSCC and NP tissues, whereas the proteomic analysis showed that MYO1B protein levels were significantly upregulated in SNSCC compared with NP tissues. Although there was no correlation between MYO1B mRNA and protein levels, post-transcriptional and post-translational mechanisms may play a key role in regulating MYO1B expression in SNSCC.
Epigenetic alteration of miR-145-3p and miR-363 has been found to control the expression of the MYO1B gene in head and neck squamous cell carcinoma, which, in turn, leads to increased migration and invasion of cancer cells . PYCR1 is an enzyme that plays a crucial role in proline biosynthesis: it converts pyrroline-5-carboxylate to proline, a mechanism important for cellular metabolism, the stress response, and protein synthesis . Literature reports and pan-cancer analyses indicate that PYCR1 is commonly upregulated in various cancers, including kidney adenocarcinoma, gastric cancer, lung cancer, pancreatic ductal adenocarcinoma, renal cell carcinoma, breast cancer, and hepatocellular carcinoma. In agreement with previous studies, PYCR1 has been reported as the most frequently overexpressed metabolic gene across pan-cancer analyses . Silencing of PYCR1 could inhibit cell proliferation and invasion and enhance chemosensitivity to doxorubicin in breast cancer cell lines . Additionally, epigenetic alteration of miR-488 was found to negatively regulate PYCR1 expression, leading to inhibition of cell proliferation and tumorigenesis in non-small cell lung cancer . The downstream effects of PYCR1 include the induction of cell proliferation and migration via the JAK–STAT3, PI3K/Akt, and Akt–mTOR pathways . Overall, these findings indicate that PYCR1 plays an important role in tumor initiation and progression; however, the expression of PYCR1 in SNSCC has remained unclear. To the best of our knowledge, our study is the first to integrate proteomic analysis with ML to identify tumor-associated markers in SNSCC, and it identified PYCR1 as a common feature protein across all four ML models. Moreover, we confirmed the expression of the PYCR1 gene by RT-qPCR: PYCR1 mRNA was highly expressed in SNSCC compared with NP tissues, consistent with the findings in the TCGA head and neck squamous cell carcinoma dataset. These results confirm that PYCR1 is significantly overexpressed in SNSCC, suggesting that it plays an important role in tumorigenesis and could be a promising tumor-associated marker in SNSCC. In future studies, PYCR1 could be applied in immunohistochemistry (IHC) to enhance its clinical utility and improve accessibility in routine diagnostics and prognostic evaluation. Additionally, integrating PYCR1 with other biomarkers, such as EGFR mutations, which are used to predict prognosis in SNSCC, could provide a more comprehensive prognostic tool. Previous research shows that higher PYCR1 expression is associated with a worse prognosis in different cancers , in agreement with our result that high PYCR1 expression is significantly associated with poor prognosis in SNSCC. Consequently, this finding suggests that high PYCR1 expression may serve as a tumor-associated biomarker to predict the prognosis of SNSCC. However, further studies are needed to elucidate the underlying mechanisms of PYCR1 in SNSCC.
4.1. Sample Collection
Left-over FFPE specimens of SNSCC (n = 62) and NP (n = 53) were retrospectively identified from the surgical pathology records database at Srinagarind Hospital, Khon Kaen University, Khon Kaen, Thailand. NP, a benign growth with chronic inflammation, was used as the nasal tissue control. The sample size (n = 55) was calculated based on the case-control study of Lareo et al., 1992 .
All experiments were performed in accordance with the approved guidelines of the Khon Kaen University Ethics Committee for Human Research, based on the Declaration of Helsinki and the ICH Good Clinical Practice Guidelines (HE611288, 19 June 2018, and HE671297, 20 May 2024). Due to the retrospective nature of the study, the Khon Kaen University Ethics Committee for Human Research waived the requirement for informed consent.
4.2. Trypsin-Digested Peptides, LC–MS/MS, and Data Analysis
A total of 30 FFPE samples of NP (n = 16) and SNSCC (n = 14) were sectioned onto tissue slides. The tissue slides were deparaffinized using xylene. Protein extraction was performed on the tissue samples using the Qproteome FFPE extraction kit (Qiagen, Hilden, Germany), according to the manufacturer's instructions. The protein samples were stored at −20 °C until use. To purify the protein prior to liquid chromatography–mass spectrometry (LC–MS/MS) analysis, 750 µL of chilled acetic acid was added to the protein samples, which were then incubated overnight at −20 °C. The protein pellets were collected by centrifugation at 9000× g for 15 min and resuspended in 10 mM NH4HCO3. The protein quantity was measured using Lowry's method. For trypsin digestion, 4 µg of each extracted protein sample was transferred to a 1.5 mL tube, dried using a SpeedVac, and resuspended in 5 µL of 10 mM NH4HCO3. Twenty microliters of 5 mM DTT/10 mM NH4HCO3 were added to each sample tube and incubated at 56 °C for 1 h. Next, 20 µL of 15 mM IAA/10 mM NH4HCO3 were added and incubated in the dark at room temperature for 1 h. A total of 4 µL of 50 ng/µL trypsin in 10 mM NH4HCO3 was then added and incubated overnight at 37 °C. Finally, the digested samples were dried using a SpeedVac, and the peptides were resuspended in 0.1% formic acid. The trypsin-digested peptides were injected into the LC–MS/MS analyzer in three dependent experiments (hybrid quadrupole Q-TOF impact II™, Bruker Daltonics, Billerica, MA, USA). The peptides were separated using the Ultimate 3000 Nano/Capillary LC System (Thermo Scientific, Waltham, MA, USA) coupled with a nano-captive spray ion source. LC–MS/MS analysis and protein quantification were performed as previously described . Briefly, LC–MS/MS raw data files were analyzed using DeCyderMS 2.0 differential analysis software (GE Healthcare Life Science, Amersham, UK) and submitted to Mascot software version 2.7.0 (Matrix Science, London, UK; accessed in May 2020) to search protein names and protein scores against the NCBI database with the following parameters: Homo sapiens (AA) database; trypsin as the enzyme; up to three missed cleavages allowed; carbamidomethyl (C) as fixed modification and oxidation (M) as variable modification; peptide charge states of +1, +2, and +3; ESI-QUAD-TOF instrument type; and reporting of the top 1000 hits. Mascot .dat files were imported into DeCyder PepMatch 2.0 software (accessed in May 2020), and the MS data were exported to a text file.
4.3. Principal Component Analysis and Identification of Differentially Expressed Genes
For the PCA, 30 individual nasal tissue samples with three dependent experiments each (90 datasets in total; NP = 48 datasets, SNSCC = 42 datasets) were analyzed using the intensity of each protein, and the plots were constructed in R with the ggplot2 and plotly packages.
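For illustration only, the PCA step described above could be reproduced along the following lines; the paper used R with ggplot2/plotly, whereas this sketch uses Python, and the `intensities` matrix (90 datasets × proteins) and `labels` vector are assumed inputs, not part of the original analysis.

```python
# Equivalent Python sketch of the PCA step (the paper used R with ggplot2/plotly).
# Assumed inputs: `intensities`, a 90 x n_proteins NumPy array of protein intensities,
# and `labels`, an array of "NP"/"SNSCC" strings for each dataset.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_scatter(intensities, labels):
    """Project the intensity matrix onto the first two principal components and plot by group."""
    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(intensities))
    for group, marker in (("NP", "o"), ("SNSCC", "^")):
        idx = np.asarray(labels) == group
        plt.scatter(scores[idx, 0], scores[idx, 1], marker=marker, label=group)
    plt.xlabel("PC1")
    plt.ylabel("PC2")
    plt.legend()
    return scores
```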
To identify the differentially expressed proteins, the relative protein expression values were compared between the NP and SNSCC groups. Proteins were considered differentially expressed if the log2(SNSCC/NP) intensity ratio exceeded ±2 and the p-value was <0.05, as assessed by a paired t-test. For multiple comparisons, the Benjamini–Hochberg (BH) procedure was applied to control the false discovery rate (FDR), and the adjusted p-values (q-values) were reported. The differentially expressed proteins were displayed in a volcano plot generated in R with the ggplot2 package, in which the x-axis represents the log2 fold change and the y-axis represents the negative log10 of the p-value from the two-tailed t-test.
4.4. Machine Learning Models
The workflow of this study is shown in . Support vector machine (SVM), logistic regression (LR), random forest (RF), and gradient boosting (GB) classifiers were developed to predict tumor markers based on the proteomic profiles of SNSCC and NP, using the Python libraries Pandas, NumPy, Matplotlib, and Scikit-learn. The differentially up-regulated proteins in SNSCC were used as the input dataset. The 90 proteomic-profile datasets were deduplicated, scaled, and grouped into their three replicates. LOGO-CV was applied to the entire dataset; the predictions and true labels were aggregated across all iterations, and performance metrics such as accuracy were then averaged across these iterations. To extract the feature proteins from each ML model, the dataset was randomly split into a training set (n = 72) and a validation set (n = 18), containing 80% and 20% of the data, respectively. The training set was used to train the four ML models, and their performance was evaluated on the validation set using confusion-matrix-derived metrics, namely accuracy, sensitivity, specificity, and precision. The feature proteins of each model were identified during this process. Ninety-five percent CIs were calculated using the epiR package. The intersection of the feature proteins was determined using the jvenn online tool ( http://jvenn.toulouse.inrae.fr/app/example.html , accessed on 10 February 2024).
4.5. In Silico Analysis of PYCR1 and MYO1B Gene Expression Based on Pan-Cancer Database
The pan-cancer gene expression data and the HNSC-TCGA dataset, including stage plots and survival rates, were analyzed using GEPIA2 ( http://gepia2.cancer-pku.cn/#index , accessed on 24 February 2024) , a web server for gene expression analysis based on RNA-seq data from 9736 tumors and 8587 normal samples from The Cancer Genome Atlas (TCGA).
4.6. Relative Gene Expression by qRT-PCR
A total of 115 FFPE samples of NP (N = 53) and SNSCC (N = 62) were used for validation. Total RNA was extracted using a High Pure RNA Paraffin Kit (Roche, Mannheim, Germany), according to the manufacturer's instructions. Total RNA was reverse-transcribed to cDNA according to the manufacturer's protocol (RevertAid H minus First Strand cDNA Synthesis Kit, Thermo Fisher Scientific, Waltham, MA, USA). Relative gene expression was analyzed by qRT-PCR using SsoAdvanced™ SYBR® Green SuperMix (Bio-Rad, Hercules, CA, USA) on a QuantStudio™ 6 Flex Real-Time PCR system (Applied Biosystems, Foster City, CA, USA). GAPDH was used as the internal control. The relative expression level of the targeted mRNA was determined using the comparative CT method (2^−ΔΔCT method). The primers used in the present study are listed in .
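A minimal sketch of the comparative CT calculation follows, assuming per-sample Ct values for PYCR1 and GAPDH and the mean ΔCt of the NP control group as the calibrator; the variable names are illustrative and not from the authors' analysis.

```python
# Minimal sketch of the comparative Ct (2^-ddCt) calculation; illustrative only, with
# assumed argument names. ct_target / ct_gapdh are per-sample Ct values, and
# calibrator_dct is the mean dCt of the NP control group.
import numpy as np

def relative_expression(ct_target, ct_gapdh, calibrator_dct):
    """Fold change = 2^-((Ct_target - Ct_GAPDH) - dCt_calibrator)."""
    dct = np.asarray(ct_target) - np.asarray(ct_gapdh)  # normalize to the GAPDH internal control
    return 2.0 ** (-(dct - calibrator_dct))
```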
4.7. Statistical Analysis
GraphPad Prism 9 software (GraphPad Software Inc., San Diego, CA, USA) was used for data analysis. The relative gene expression data were analyzed using the nonparametric Mann–Whitney test. The optimal cutoff value for categorizing patients into the high and low PYCR1 expression groups was determined using the maxstat R package. Survival analysis of PYCR1 expression in SNSCC patients (n = 49) was performed using the Kaplan–Meier method with a log-rank test. Fisher's exact test was used to evaluate the relationship between the PYCR1 groups and clinicopathological features. All statistical tests were two-sided, and a p-value of <0.05 was considered statistically significant.
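For illustration, the Mann–Whitney and Fisher's exact tests described above could be run as follows; the actual analysis was performed in GraphPad Prism and R, so this SciPy sketch and its input names are assumptions.

```python
# Illustrative SciPy sketch of the group-comparison and association tests; the actual
# analysis was run in GraphPad Prism and R, and the input names here are assumptions.
from scipy.stats import mannwhitneyu, fisher_exact

def expression_difference(snscc_expr, np_expr):
    """Two-sided Mann-Whitney U test comparing relative expression in SNSCC vs. NP."""
    return mannwhitneyu(snscc_expr, np_expr, alternative="two-sided")

def feature_association(table_2x2):
    """Fisher's exact test for a 2x2 table, e.g.
    [[high & feature present, high & feature absent],
     [low & feature present,  low & feature absent]]."""
    return fisher_exact(table_2x2, alternative="two-sided")
```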
Our study utilized proteomic and ML approaches to identify potential biomarkers for sinonasal squamous cell carcinoma (SNSCC). Seventeen feature proteins were common to all models. PYCR1 was validated as a significant SNSCC marker by RT-qPCR, and its high expression correlated with poor overall patient survival, suggesting that PYCR1 could serve as a tumor-associated prognostic biomarker for SNSCC.
Family and community medicine: the most chosen specialty in the MIR
The intention announced at the CIT to increase, by the emergency route, the training offer in Family and Community Medicine (MFYC) by 1,000 additional places is headed down the wrong path. The current training capacity, given the structural and care conditions of our primary care (AP), is at its limit, and expanding it by the amount proposed, while devaluing the accreditation requirements for teaching units and centers, will not produce the desired effect and will further degrade training and the primary care of the future. I cannot help but see, in this desperate measure, the train from the famous Marx Brothers film set in the West, and hear the echo of its celebrated line: more wood! They burn the train's carriages to feed an engine that, in the end, will have nothing left to transport. The problem today does not lie in the supply: 2,492 training places is practically the number that Verónica Casado , former president of the National Commission of the Specialty, had been demanding for years to avoid the deficit we now have. The overall supply of MFYC training places has for some time been showing clear signs of exhaustion, exceeding the demand for it. Therefore, any measures adopted should instead be aimed at stimulating demand for the specialty and the desire to remain in the kind of family medicine that is practiced in primary care. The focus must be placed on increasing the attractiveness of primary care and its capacity to retain professionals . To deepen the analysis of this situation, we will not avoid considering the vocational aspects of medical graduates, the content of the MFYC specialty and its training program, the structure, organization and capacity of its network of teaching units, the presence and necessary contributions of MFYC to the university and, finally, the most important of all, the situation of primary care. We must flatly reject the idea that family medicine is an unwanted specialty. It is self-evident that family medicine is the most chosen specialty, with a degree of preference compared with other specialties that, for the same exam score, ranges between 50 and 75% across the set of specialties on offer, as shown by the analysis of Yoseba Cánovas et al. Attributing this solely to the effect of the supply, or to graduates with very high academic records clutching at straws in the absence of other options, is mistaken. There is a diversity of studies on the vocational aspects of students who choose medicine and, later, family medicine. Most of them show that the motivational force driving these students is the desire to help others. It is true, as Koldo Totorika et al. showed us years ago, that these students' contact with the profession, generally in its hospital version, sensitizes them in various ways and attenuates that initial vocational impulse, with identification with the values and behaviors observed during their training gaining weight instead, values that are highly sensitive to organizational conditions and professional practice, which in our country today are to a large extent perverted .
Those of us with a long track record in specialized training in family medicine observe, year after year, two opposing feelings converging in each resident: a strong identification with the values of the specialty and with the competencies through which those values are expressed, and the excitement they feel when they incorporate them into their own set of capabilities. At the same time, they show frustration and a feeling of powerlessness on realizing that much of what they have learned is neither valued nor will it be applicable in their real life as specialists, given the current situation of primary care. Another aspect to bear in mind is that family medicine is a very versatile specialty and its capabilities are attractive in other clinical and non-clinical settings. Not only are emergency physicians mostly family physicians; so are those in palliative care and in home hospitalization units, as are the physicians who, in hospitals, help the various surgical specialties, ever less interested in clinical work, to care for their hospitalized patients. Others devote themselves to technical roles in management units, others are employed by occupational mutual insurance companies, and so on. This versatility, positive in principle, makes planning the workforce needs of primary care complex, not only in terms of determining how many specialists to train according to those needs, but also because there are many escape routes for professionals, incentivized by the poor working and management conditions that have prevailed in primary care for many years. Specialty programs define the specialty's map of competencies and provide guidance on how these are to be acquired, as well as on the care and teaching structure needed for this. They are the map for the journey to specialization, but also its binding agent. The current MFYC program was implemented in 2005 . Its design and launch at that time was a milestone of participatory creativity and of the profession's commitment to it. It incorporated the concept of value, was structured around competencies, priorities, and levels of responsibility, provided guidance on how to organize training itineraries, and established a broad body of complementary training and the existence of various teaching roles in addition to the tutor. Its application over these 19 years has allowed us to learn a great deal about educational methodology and also about its strengths and weaknesses. Unintentionally, in the reality of its application, it leads to rather fragmented training, without clear integrating links and with too much dependence on the hospital. It contains a whole set of short rotations through different hospital-based services and units, which makes competency management complex, however well it is described in the tutor's role. For more than three years there have been attempts to redesign this program, incorporating what has been learned over all this time , but, unlike its predecessor, this process is not participatory and is suffering inexplicable interference that prevents the National Commission of the Specialty from working with freedom, participation, and transparency, which is generating a great deal of dissatisfaction and distrust. The current training structure of the MFYC specialty rests on what are known as the teaching units of this specialty.
These teaching units are relatively complex structures that organize around themselves a whole set of care facilities, mainly hospitals and health centers, and of professionals whose mission is to help resident physicians train in this specialty. The smallest of these units trains as many residents as the largest hospital department of any specialty, and the largest of them receives annually as many resident physicians as some medical schools have students. This great structural and organizational heterogeneity converges in the application of the same specialty program, but that application probably achieves very uneven effectiveness, which has never been evaluated. Another aspect to consider is that quite a few of these teaching units and teaching health centers are located far from the large cities, and from the services and opportunities these provide, which is yet another constraint when choosing this specialty, since many residents will refuse to go live in a village or a small peripheral town, or to have to travel dozens of kilometers every day to get to work. The effect of 'emptied Spain' also operates in specialized training, as far as family medicine is concerned. This whole teaching structure has been straining its capacity for years and is probably already at or beyond its limits. It is clear that the accreditation system for teaching units and centers can be improved and made more agile and less cumbersome, but nothing will be gained by accepting as valid for teaching structures or situations whose quantitative and qualitative aspects are already being questioned for care delivery . Even today, a large number of Spanish universities do not incorporate family medicine as a core discipline with full functions in undergraduate training. Family medicine remains largely unknown to many medical graduates, which makes the number of physicians who later choose this specialty all the more surprising. Attributing that choice solely to the effect of the supply is not convincing. For many graduates, the idea of being doctors of people and not only of organs and systems is very attractive, and they reject the monotony of devoting their entire lives to a small number of diseases or even a single one. Aware of the crisis that medical schools are currently going through, and despite it, family medicine claims, through its scientific societies, the importance of its full integration into the undergraduate degree, but it should do so by demanding that it be its own teaching structure, everything built around the teaching units, that carries this out and, given its magnitude, with the status of a dedicated, non-optional subject that also collaborates in teaching other subjects. As is rightly being demanded, its weight in training credits should be proportional to the weight it carries in the annual offer of specialty training places. This would strengthen the training continuum, reinforce the teaching structure of both the undergraduate degree and specialized training, and help bolster the weak research capacity of MFYC and primary care, so necessary for generating knowledge that is genuinely the specialty's own. This is not the time to fragment and create parallel networks; it is the time to join forces and create more robust knowledge-generating structures from and for the specialty.
However, the difficulties in making all this happen are many. In the university, the fiefdoms of power, the spaces, the training credits, and the budgets are already taken, and nobody is willing to make room for this much-needed change to happen. It will be essential to modify, through regulatory or legislative intervention, the current framework so as to favor this and other changes that undergraduate training in Spain needs, as well as the hiring of qualified clinical teachers. Family medicine is attractive to many medical graduates; what is not attractive is what comes afterwards: poor working conditions, temporary contracts, salaries not commensurate with the training time invested and the responsibility assumed, the absence of professional development, the schedules, the work shifts, the difficulties in reconciling family and personal life, and so on. That is the working ecosystem awaiting those who finish their specialty each year: they join a ship adrift, without direction, running on inertia or on reaction. It is Spanish primary care. Despite the speeches and the statements in the press, primary care is not a priority in Spain. For many years it has been far more politically profitable to inaugurate hospitals, equip them with the very latest technology, and give free rein to medication consumption than to invest in equity , problem-solving capacity , care, proximity, accessibility, longitudinality , training, or non-medicalizing health promotion and prevention . For more than 20 years, plans have been drawn up that never come to anything: AP21 , the 2019 Strategic Framework for Primary Care , the Primary and Community Care Action Plan 2022-2023 , and a long list of unconvincing and ultimately failed attempts. The profession demands substantial change, participation, and management autonomy, given that the growing politicization of management is not effective and the system deteriorates further year after year . In this situation, consultations have become, in many cases, very hostile places that produce less and less effectiveness, and care pressure increases in the face of growing unmet needs. This is the learning environment of the residents, in which it is increasingly common to hear the mantra that there is no time, that what we are taught cannot be done — the patient-centered clinical method and population-based clinical management are science fiction, let alone community care — and one enters a pernicious loop that is in no way conducive to excellence, training, or professional satisfaction. The starvation of primary care sickens the entire health system and severely affects its problem-solving capacity. Patients end up being seen in the wrong places at the wrong times. The system, if we can call it that, far from fixing the problem, adds fuel to it: more wood! The symptoms of the problem are treated, but not its causes, nor the causes of the causes.
Patients decide to go to the hospital emergency department without passing through primary care? Then the hospital's emergency services are enlarged and family physicians are hired for them. End-of-life care is not provided in primary care, and many terminal patients die in emergency department corridors or in ambulances on the way there? Hospital palliative care units staffed by family physicians are created. Primary care lacks quality home care? Home hospitalization units, staffed with family physicians, are created to provide it. Family physicians no longer have time for listening and reflective practice? Let us put psychologists in primary care, and so on. These paths, these patch-like decisions, are what are driving primary care toward insignificance and disaffection and, step by step, the system toward unsustainability. Vicente Ortún has been telling us insistently, and for a long time, that primary care is key to the efficiency and sustainability of the health system, but not this lifeless primary care that we have . It is not only residents and young family physicians who are fleeing primary care; patients have been doing so for some time, and civil servants and the ruling classes did so much earlier. Juan Simó has been denouncing this with data in his account of the sociological 'skimming' of primary care , . It is a democratic anomaly that civil servants, politicians, and other elites provide themselves, with public funds, with parallel health services, different from those of the common people and supposedly of higher quality. And in recent years, not only they, but also those families whose finances allow it, take out private health insurance to protect themselves against the poor quality of the public services. If this is so, and it is, should we be surprised that residents or young specialists abandon ship? Spanish family medicine is not the cause of the problem; it is one of its consequences. If the causes are not acted upon, nothing will be solved; more wood will be burned, resources and professionals will be burned, and everything will keep deteriorating. Several things can and should be done in parallel, but the two key factors for success are the redesign and financing of primary care and the full presence of family medicine in the university, integrating its entire teaching structure into it, bearing in mind that, in certain circumstances, the order of the factors does alter the product. Until that happens, it will not be possible to improve the teaching structure for training specialists in MFYC, because that structure is essentially care-based, nor will the desired effect be achieved by revising the specialty's program, for the same reason. In the light of some measures already adopted, there is a real risk of moving from a dilettante attitude, in its worst sense, to a fanatical one, bent on redoubling efforts when the objectives have been lost from sight; and although the current situation is bad, it can always get worse. Although it is already late, it remains necessary to address with determination, boldness, and intelligence the causes of the problems of our Health System, since it is a first-order factor of equity and, within it, its primary care is a key element of its effectiveness and sustainability. This paper is not research involving humans or animals, but a situation analysis based on personal knowledge and on the published literature referenced. This article was prepared with the author's own means, without external funding.
The author declares no financial interests or personal relationships that could have influenced the content of this article.
Electrosprayed minocycline hydrochloride-loaded microsphere/SAIB hybrid depot for periodontitis treatment
Introduction
Periodontal disease, a chronic inflammatory disease of the periodontium, often results in progressive damage to the surrounding alveolar bone (Mou et al., ; Munasur et al., ). This disease is not only the major cause of tooth loss in adults but also one of the two major threats to oral health (Nazir, ). The standard treatment for periodontitis is scaling and root planing (SRP); however, the success of SRP depends largely on clinical skill, and it cannot completely remove the bacteria that dwell deep in the periodontal pocket (Do et al., ; Nazir, ; Mou et al., ). Therefore, antibiotics are often combined with SRP to treat periodontitis (Pang et al., ). Minocycline hydrochloride (MINO), a semi-synthetic tetracycline derivative, has a broader spectrum of antibacterial activity than other tetracycline antibiotics and has been frequently used to treat periodontal disease (Oliveira et al., ; Kashi et al., ). In addition to its antibacterial activity, MINO exhibits pharmacological properties that are useful for the management of periodontitis (Nagasawa et al., ). MINO has been shown to restrain bone resorption and promote new bone formation. Pedro Sousa Gomes (Gomes & Fernandes, ) demonstrated that 1 µg/mL of MINO significantly improved the proliferation of human bone marrow osteoblastic cells. Furthermore, in our previous study, we demonstrated that an appropriate concentration of MINO can upregulate the expression levels of Runt-related transcription factor 2 (Runx2), alkaline phosphatase (ALP), and osteopontin (OPN) in osteoblasts and increase the differentiation and mineralization of osteoblasts from SD rats (Shao et al., ). However, high concentrations of MINO in the periodontal pocket may damage the viable cells of the supporting periodontal tissue, especially the bone-forming cells (osteoblasts) (Almazin et al., ). Salah M (Almazin et al., ) reported harmful effects of MINO at a concentration of 0.5 mg/mL on osteoblast proliferation in vitro. Pedro Sousa Gomes (Gomes & Fernandes, ) also reported that high levels of MINO exerted a dose-dependent deleterious influence on osteoblasts and delayed their proliferation and differentiation. Therefore, recent studies have focused on local application around the minimal inhibitory concentration of MINO together with a sustained-release device, in order to avoid the adverse effects caused by a high local concentration of antibiotics (Vandekerckhove et al., ). Nevertheless, the minocycline-loaded poly(lactic-co-glycolic acid) (PLGA) electrospun membrane that we fabricated in our previous study showed an obvious burst release (up to 20%) on the first day (Ma et al., ); thus, a MINO-loaded carrier that further reduces burst release should be explored in future research. Among the available sustained-release delivery systems, sucrose acetate isobutyrate (SAIB) is one of the most promising, being biodegradable and injectable, and it is generally considered safe by the U.S. Food and Drug Administration (FDA) (Wang et al., ; Harloff-Helleberg et al., ). Furthermore, the viscosity of SAIB can be dramatically reduced by mixing it with a small amount of solvent, for example ethanol, allowing SAIB to be easily injected through small needles. Upon injection, the solvent diffuses from the depot into body fluid, resulting in a highly viscous SAIB depot from which the drug can be released in a sustained manner (Wang et al., ; Harloff-Helleberg et al., ; Yang et al., ).
However, burst release still occurred when SAIB was used alone (Park & Lee, ). Xia Lin's study and our previous study showed that the combination of microspheres and SAIB could significantly decrease the burst release of the investigated drugs (Lin et al., ; Yang et al., ). There are several methods to prepare microspheres, among which electrospray is a promising method offering strict control over size distribution and high encapsulation efficiency for both hydrophobic and hydrophilic drugs (Park & Lee, ; Furtmann et al., ). PLGA is approved by the FDA, has good biocompatibility and biodegradability, and allows the kinetics of drug release to be precisely controlled from days to months (Ford Versy et al., ; Zhang et al., ; Gu et al., ). Moreover, polyethylene glycol (PEG), a hydrophilic material, was used to improve the monodispersity and encapsulation efficiency of the electrosprayed microspheres (Dan et al., ). In this study, we prepared MINO-loaded PLGA/PEG microspheres by the electrospray technique and evaluated their characteristics, such as morphology, size distribution, surface wettability, drug release, and degradation. The MINO-loaded microspheres (MINO-microspheres) were mixed with SAIB (MINO-microsphere/SAIB; MINO-M-SAIB) in order to attain slow and continuous release, and the alveolar bone augmentation potential of the fabricated MINO-M-SAIB hybrid depot was studied in SD rats with ligature-induced experimental periodontitis.
Methods and materials
2.1. Materials
PLGA (copolymer ratio 75:25, molecular weight 66,000–107,000) was purchased from Jinan Daigang Biomaterial Co. Ltd. (Shandong, China). Minocycline hydrochloride (MINO), polyethylene glycol (PEG, Mn = 6 kDa), and sucrose acetate isobutyrate (SAIB, density 1.146 g/mL at 25 °C, MW = 846.91 g/mol) were purchased from Sigma-Aldrich (St. Louis, MO), and the MINO ointment (Periocline®) was procured from Sunstar Inc. (Osaka, Japan). For the culture of osteoblasts, alpha-modified Eagle's medium (a-MEM, HyClone), antibiotics (Sigma), and fetal bovine serum (FBS, Gibco, Australia) were used. SD rats were obtained from the animal center of Chongqing Medical University. Anti-osteoprotegerin rabbit pAb (GB11151) and anti-RANKL rabbit pAb (GB11235) were purchased from Servicebio (Wuhan, China). All chemicals used were of analytical grade.
2.2. Preparation of MINO-microspheres
PLGA was dissolved in chloroform to form a solution at a concentration of 0.07 g/mL, and PEG (5% w/w of PLGA) was added to prepare a PLGA/PEG solution. Then, different concentrations of MINO (0%, 10%, 12%, and 14% w/w relative to PLGA) were mixed with the PLGA/PEG solution. The resultant solutions were magnetically stirred at room temperature for 2 h to achieve complete dissolution. For electrospraying, performed with a single-nozzle electrospinning setup (Beijing Yongkang Leye Technology Development Co. Ltd., China), the solutions were loaded into 5-mL syringes fitted with 20-G needles and infused at a steady rate of 0.9 mL/h to produce the PLGA/PEG microspheres. A voltage of 14 kV was applied between the needle and the aluminum foil (used as the microsphere collector), and the distance between the electrospraying needle and the aluminum foil was kept at 20 cm. The relative humidity and temperature were strictly controlled at 30–35% and 20–25 °C, respectively. Finally, the collectors were placed in an incubator at 37 °C for 2 days to remove the residual solvent.
The dry microspheres were then stored at −20 °C for further analysis.
2.3. Characterization of MINO-microspheres
2.3.1. Particle morphology
Scanning electron microscopy (SEM; S-3000N, HITACHI, Japan) was used to analyze the morphology and size of the MINO-microspheres at an accelerating voltage of 5 kV after gold coating. The average diameters of the MINO-microspheres were measured with the ImageJ 2.0 analysis software. The coefficient of variation (CV) was used to assess the monodispersity of the MINO-microspheres and was calculated using formula (1); the monodispersity of the microspheres increases as the CV value decreases.
(1) CV = (standard deviation / mean particle size) × 100%
2.3.2. Drug-encapsulation efficiency (EE) and drug-loading efficiency (LE)
The EE and LE experiments were conducted in accordance with the ultracentrifugation technique described in our previous study (Yang et al., ). The EE represents the proportion of MINO entrapped in the microspheres; 3 mg of MINO-microspheres were therefore dispersed in 1 mL of phosphate-buffered saline (PBS) and blended thoroughly for 5 min to obtain a sample solution, which was then centrifuged (13,000 rpm for 10 min) in a 10-K ultracentrifuge tube to separate the free MINO from the surface of the microspheres. The supernatants were carefully extracted and analyzed at a wavelength of 350 nm using a UV-Vis spectrophotometer (ND-2000; Thermo Scientific). Similarly, the LE represents the total amount of MINO in the microspheres and was determined by dissolving 3 mg of microspheres in 1 mL of absolute ethanol and sonicating for 10 min. The sample solution was allowed to stand until complete dissolution of the polymers was achieved, followed by centrifugation at 15,000 rpm for 20 min. The supernatant was measured as described earlier. All experiments were performed in triplicate. The EE and LE of MINO were calculated using formulas (2) and (3), respectively:
(2) EE = (1 − free MINO / amount of MINO in the microspheres) × 100%
(3) LE = (amount of MINO in the microspheres / weight of the microspheres) × 100%
2.3.3. Contact angle measurement
The contact angle was measured using a video contact angle device (VCA Optima, AST Inc.). The aluminum foil carrying the collected microspheres was placed in the testing position and covered with a drop of deionized water (approximately 2 μL each time), and the contact angle value was recorded immediately. Each group included three samples, and the contact angle of each sample was determined at three different locations.
2.3.4. Laser scanning confocal microscopy
Under certain specific conditions, minocycline can emit yellow-green fluorescence (Dodiuk-Gad et al., ). Consequently, samples of electrosprayed microspheres collected on glass slides were observed under a laser-scanning confocal microscope (TCS SP8X; Leica, Wetzlar, Germany) at an excitation wavelength of 375 nm.
2.3.5. Differential scanning calorimetry (DSC)
DSC measurements were carried out using a DSC-Q2000 (TA Instruments). Accurately weighed 10-mg samples were placed in aluminum pans and sealed with an aluminum lid. A sealed empty pan was used as the reference. The heating rate was set to 10 °C/min from 30 °C to 210 °C under a dry nitrogen atmosphere (20 mL·min⁻¹).
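As a convenience, formulas (1)–(3) can be expressed as simple helper functions; this is an illustrative sketch, not the authors' analysis script, and the argument names are assumptions (all masses in the same unit).

```python
# Helper functions for formulas (1)-(3); illustrative only, with assumed argument names.
import statistics

def coefficient_of_variation(diameters):
    """Formula (1): CV (%) = standard deviation / mean particle size x 100."""
    return statistics.stdev(diameters) / statistics.mean(diameters) * 100.0

def encapsulation_efficiency(free_mino, total_mino_in_microspheres):
    """Formula (2): EE (%) = (1 - free MINO / MINO in microspheres) x 100."""
    return (1.0 - free_mino / total_mino_in_microspheres) * 100.0

def loading_efficiency(total_mino_in_microspheres, microsphere_weight):
    """Formula (3): LE (%) = MINO in microspheres / microsphere weight x 100."""
    return total_mino_in_microspheres / microsphere_weight * 100.0
```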
2.4. Preparation of the minocycline hydrochloride/SAIB (MINO-SAIB) depot and minocycline hydrochloride-microsphere/SAIB (MINO-M-SAIB) hybrid depots

SAIB was added to ethanol to form a transparent SAIB/ethanol (80/20, w/w) solution. Next, 1 mg of MINO was dispersed into the SAIB/ethanol (80/20, w/w) system to prepare the minocycline depot. Likewise, microspheres with the three different drug loadings were respectively dispersed into the SAIB/ethanol (80/20, w/w) solution by vortexing for 5 min in order to obtain the minocycline hybrid depots. The final drug loading of the prepared MINO-SAIB and MINO-M-SAIB depots was 20 mg/g in all cases.

2.5. In vitro release

Approximately 50 mg of the MINO-SAIB and MINO-M-SAIB depots (each containing 1 mg MINO) was injected into a 1.5-mL EP tube containing 1 mL of release buffer (PBS, pH 7.4, 0.02% NaN3). At the same dose, MINO-microspheres (containing 1 mg MINO) were dispersed into the PBS solution. All specimens were placed in a ZWY-110 × 30 reciprocal shaking water bath (Zhicheng Inc., China) at 37 °C. At each predetermined time point, the release buffer was collected by centrifugation (13,000 rpm for 10 min) and replaced with 1 mL of fresh PBS. The release buffer was then analyzed by UV-Vis spectrophotometry at a wavelength of 350 nm. At least three repeats were conducted for each sample group in all experiments.

2.6. In vitro degradation rate

The in vitro degradation rate was determined by measuring the weight loss of the microspheres. Briefly, 10 mg of microspheres were immersed in 1 mL of PBS and kept in an incubator at a constant temperature of 37 °C for up to 90 days. At each time point (7, 15, 30, 45, 60, 75, and 90 days), the samples were removed from the medium, washed with distilled water, and then dried in an incubator at 37 °C for 2 days. Finally, the degradation rate was calculated using formula (5), as follows:

(5) Degradation rate = (W0 − Wt) / W0 × 100%

where W0 is the initial weight of the microspheres and Wt is the dried weight of the microspheres at time t. Each sample group was tested at least three times.

2.7. The porosity of depots

When the SAIB solution was injected into the aqueous release medium, the solvent in the system diffused into the water phase, while water diffused into the interior of the depot, leading to the formation of water-rich micropores in the depot. Therefore, the porosity was estimated by measuring the volume of water that diffused into the depot. Briefly, approximately 50 mg of the analyzed specimens was injected into 1 mL of release buffer (PBS) and placed in a shaking water bath at 37 °C; at the designated time points (0, 2, 8, 15, 30, and 45 days), the release buffer was completely removed, after which the absorbed water was removed by lyophilizing the depot. The volume of water absorbed by the depot was calculated from the weight difference of the depot before and after lyophilization. The porosity was then calculated using formula (7), as shown below:

(7) P = [(W2 − W3)/ρ1] / [W1 × C/ρ2 + (W2 − W3)/ρ1] × 100%

where P is the porosity; W1, W2, and W3 are the initial weight of the SAIB solution, the weight of the depot after water absorption, and the weight of the lyophilized depot, respectively; ρ1 and ρ2 are the densities of water and SAIB, respectively; and C is the concentration of SAIB in the SAIB solution. Each sample group was repeated at least three times in all experiments.
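The gravimetric degradation and porosity measurements above are likewise straightforward to compute once the weighings are done. The following sketch implements formulas (5) and (7) directly; the numerical inputs are hypothetical placeholders for the weighings, while the SAIB density and the SAIB fraction of the injection solution follow the values stated in this paper.

```python
WATER_DENSITY = 1.0    # g/mL (rho_1)
SAIB_DENSITY = 1.146   # g/mL at 25 C (rho_2), as stated in Section 2.1
SAIB_FRACTION = 0.80   # C: SAIB fraction in the SAIB/ethanol (80/20, w/w) solution

def degradation_rate(w0, wt):
    """Formula (5): (W0 - Wt) / W0 * 100%, using dried microsphere weights."""
    return (w0 - wt) / w0 * 100.0

def depot_porosity(w1, w2, w3,
                   rho1=WATER_DENSITY, rho2=SAIB_DENSITY, c=SAIB_FRACTION):
    """Formula (7): porosity from the water absorbed by the depot.

    w1: initial weight of injected SAIB solution
    w2: weight of the depot after water absorption
    w3: weight of the lyophilized depot
    Weights may be in any single consistent unit (here: mg).
    """
    water_volume = (w2 - w3) / rho1
    saib_volume = w1 * c / rho2
    return water_volume / (saib_volume + water_volume) * 100.0

if __name__ == "__main__":
    # Hypothetical weighings (mg), for illustration only.
    print(f"Degradation = {degradation_rate(10.0, 7.2):.1f}%")
    print(f"Porosity    = {depot_porosity(50.0, 58.5, 46.0):.1f}%")
```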
2.8. Fourier transform infrared spectroscopy (FTIR)

FTIR (Thermo Scientific Nicolet iS5) was used to analyze the chemical composition of the MINO-microspheres, the MINO-M-SAIB hybrid depot, and their components over a wavenumber range of 500–4000 cm⁻¹. The MINO, PEG, PLGA, MINO-microsphere, and MINO-M-SAIB hybrid depot samples were analyzed by attenuated total reflection (ATR), while SAIB was analyzed by the transmission method.

2.9. Cytotoxicity of the drug delivery systems on osteoblastic cells

2.9.1. Cell culture

SD rats (age: 2–3 days) were killed by cervical dislocation and then sterilized in 75% alcohol for 5 min, after which their cranial bones were collected and cultured for several days to obtain osteoblast cells. The osteoblast cells were then cultured in a-MEM with 10% FBS and 100 U/mL antibiotics (penicillin-streptomycin-amphotericin) at 37 °C under a 5% CO2 atmosphere. The culture medium was replaced every second day.

2.9.2. CCK-8 assay

The cytotoxicity of the MINO-microspheres and of the MINO-M-SAIB and MINO-SAIB depots was assessed by the CCK-8 assay. The MINO-microspheres, MINO-SAIB, and MINO-M-SAIB depots (all containing 1 mg MINO) were respectively immersed in 1 mL of culture medium (a-MEM with 10% FBS and 100 U/mL antibiotics) and then placed in a 37 °C incubator for 24 h to obtain their respective extracts. The extracts were sterilized through a 0.22-μm filter and then refrigerated at 4 °C for subsequent experiments. The osteoblast cells were seeded at a density of 2 × 10⁴ cells/well in a 96-well plate. After 24 h of cell adhesion, the old medium was discarded and 100 μL of the different extracts was added to treat the cells; 100 μL of culture medium (a-MEM with 10% FBS and 100 U/mL antibiotics) was used as the control. After 24 h, the cell culture medium was removed and replaced with 100 μL of fresh culture medium and 10 μL of CCK-8. After 3 h, the OD values were measured with an enzyme-linked immunosorbent assay (ELISA) plate reader at a wavelength of 450 nm (Bio-Tek, Winooski, VT).

2.10. Animal experiments

2.10.1. Establishment of periodontitis

The experimental procedures were approved by the Ethics Committee of the Affiliated Stomatological Hospital of Chongqing Medical University (Approval no. [2019] 34), and 7-week-old female Sprague–Dawley rats (from the Institute of Experimental Animal Center of Chongqing Medical University) were randomly assigned to five groups (n = 5): (1) control (no ligation), (2) ligation, (3) ligation + M-SAIB, (4) ligation + MINO-M-SAIB, and (5) ligation + Periocline®. To establish the periodontitis model, 10% chloral hydrate (4 mL/kg) was used to anesthetize the experimental rats via intraperitoneal injection, and a 0.2-mm-diameter orthodontic steel wire was ligated around the first molar of the rat mandible for 4 weeks. During this period, the rats were fed 10% sugar water and, if any ligature appeared to have loosened or fallen off, it was replaced immediately. These procedures were not conducted for the control group.
Then, the orthodontic steel wires were removed. The rats with periodontitis were either left without treatment (the ligation group), or injected in the periodontal pockets with M-SAIB (approximately 50 mg) or MINO-M-SAIB (approximately 50 mg, 1 mg MINO) immediately after removal of the ligature, or injected in the periodontal pockets with Periocline® (approximately 8.33 mg, 0.17 mg MINO) once a week, for 3 and 6 weeks (n = 5 at each time point).

2.10.2. Pharmacodynamic evaluation

To evaluate the periodontal status, the most important clinical periodontal parameters, the gingival index (GI) and the periodontal pocket depth (PD), were recorded, and the status of the periodontal tissues around the maxillary first molar was also documented photographically. Five rats per group were examined at specific time points (0, 2, 4, and 6 weeks).

2.10.3. Microcomputed tomography (micro-CT)

After the ligatures were removed, the rats were killed by administering an overdose of anesthetic (e.g. chloral hydrate) at 3 and 6 weeks. The specimens (the alveolar bone including the first, second, and third molars of the rat maxilla) were collected and fixed in 4% paraformaldehyde for micro-CT scanning (Viva CT40; SCANCO Medical, Bruttisellen, Switzerland) at a resolution of 15 μm, a voltage of 70 kV, and a current of 114 μA. Linear measurements of the alveolar bone loss (ABL) were taken from the cement-enamel junction (CEJ) to the alveolar bone crest (ABC) at the distal and mesial roots of the maxillary first molar in two-dimensional (2-D) micro-CT images (Park Chan et al., ). For the volumetric analysis, the bone volume/tissue volume (BV/TV) parameter was assessed in a 3-D region of interest (ROI) using the Mimics analysis software. The ROI axially comprised a rectangle whose length and width encompassed the entire crown and whose height extended vertically from the CEJ to the apex of the maxillary first molar. Then, the first molar was removed and the residual bone volume in the ROI was analyzed.

2.10.4. Histology observation

The alveolar bone samples were decalcified with 10% ethylenediaminetetraacetic acid (EDTA) for 1.5 months at room temperature. The samples were then dehydrated, embedded in paraffin, and sliced along the mesio-distal direction of the tooth to obtain 5-μm-thick tooth-periodontal sections. The sagittal sections were then stained with hematoxylin and eosin (H&E) (Solabao, Beijing, China) to evaluate the pathological condition of the periodontal tissues.

2.10.5. Immunohistochemistry (IHC)

The expression of the osteoclastic marker RANK ligand (RANKL) and the osteogenic marker osteoprotegerin (OPG) was assessed at 3 and 6 weeks. The IHC-stained images were analyzed, and the average optical density (AOD) values of the images were measured for quantitative analysis with the Image-Pro Plus 6.0 software.

2.11. Statistical analyses

All data were analyzed with the SPSS 20 software and expressed as the mean ± standard deviation. One-way analysis of variance (ANOVA), followed by the Student–Newman–Keuls test, was employed to determine statistical significance, with p < 0.05 considered statistically significant.
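Section 2.11 specifies a one-way ANOVA followed by a Student–Newman–Keuls post-hoc test in SPSS. As a rough, non-authoritative illustration of the omnibus comparison only, the sketch below runs a one-way ANOVA in Python on hypothetical group measurements; the data values are invented for demonstration, and the SNK post-hoc step performed in SPSS is not reproduced here.

```python
from scipy import stats

# Hypothetical measurements (e.g., ABL in mm) for three treatment groups;
# these numbers are invented for illustration only.
ligation    = [1.46, 1.43, 1.50, 1.48, 1.45]
mino_m_saib = [1.05, 1.01, 1.08, 1.02, 1.04]
periocline  = [1.06, 1.00, 1.03, 1.05, 1.02]

# One-way ANOVA (omnibus test across the groups), as described in Section 2.11.
f_stat, p_value = stats.f_oneway(ligation, mino_m_saib, periocline)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A significant ANOVA would then be followed by pairwise post-hoc comparisons
# (Student-Newman-Keuls in the original SPSS workflow), which are omitted here.
```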
Results

3.1. Characteristics of MINO-microspheres and MINO-M-SAIB hybrid depots

In the present study, PLGA/PEG microspheres were prepared with different MINO loading concentrations (0%, 10%, 12%, and 14% w/w relative to PLGA). For simplicity, these MINO-microspheres are abbreviated as M (0%), M1 (10%), M2 (12%), and M3 (14%), respectively, in this study. The SEM images of the electrosprayed microspheres (M, M1, M2, and M3) are shown in . The MINO-microspheres were almost monodispersed and spherical in shape, and their surface morphology was slightly rough. The diameter distributions of the MINO-microspheres are depicted in , and their diameters are documented in .
The CV values of M, M1, M2, and M3 were 8.43%, 7.88%, 9.72%, and 9.09%, and their diameters were 5.385 ± 0.454 µm, 5.429 ± 0.428 µm, 5.297 ± 0.515 µm, and 5.354 ± 0.487 µm, respectively. Among the microspheres with different MINO loading capacities, the differences in diameter were not statistically significant (p > .05). As shown in , the drug loadings of M1, M2, and M3 were 8.71 ± 0.012%, 11.148 ± 0.077%, and 12.64 ± 0.03%, which accounted for 87.01%, 92.89%, and 90.31% of the theoretical drug loading, respectively. Furthermore, the encapsulation efficiencies of M1, M2, and M3 were 65.57 ± 3.07%, 57.24 ± 1.45%, and 48.04 ± 3.24%, demonstrating that the encapsulation efficiency decreased with increasing MINO loading in the microspheres (p < .05).

A lower contact angle indicates a more hydrophilic material. As demonstrated in , the contact angle decreased as the amount of MINO in the microspheres increased, with values of 93.777 ± 0.303°, 91.76 ± 0.1°, 90 ± 0.25°, and 86.107 ± 0.487° for M, M1, M2, and M3, respectively . These results illustrate that the hydrophilicity of the microspheres increased with increasing MINO concentration (p < .05).

Laser confocal microscopy was employed to visualize the distribution of MINO in the microspheres. In , the microspheres with different drug loading capacities present circular yellow-green signals, depicting a relatively uniform distribution of MINO. In addition, when the theoretical drug loading of the electrosprayed microspheres increased from 12% to 14%, the fluorescence density of the microspheres increased and stronger fluorescence was concentrated on the microsphere surface, indicating that the amount of MINO both within the microspheres and on their surface increased with the MINO loading.

DSC analysis was performed on the MINO-loaded microspheres as well as on MINO, PLGA, and PEG . The DSC curve of PLGA showed a small endothermic peak at 57.31 °C, and PEG showed an endothermic melting peak at 64.4 °C. The thermal analysis of MINO revealed two endothermic peaks at approximately 186.48 °C and 197 °C, followed by its degradation. No obvious endothermic peak was observed for the MINO-loaded microspheres, indicating that the thermal stability of the microspheres was improved compared with the raw materials. Moreover, the MINO peaks were not visible in the MINO-loaded microspheres, indicating a change from the crystalline form to the amorphous form.

The FTIR spectra of the MINO-microspheres, the MINO-M-SAIB hybrid depot, and their components are shown in . The absorption bands of PEG at 2880 cm⁻¹ and 1465 cm⁻¹ can be assigned to –CH2 stretching and bending vibrations, respectively, while the C–O–C stretching vibration (1094 cm⁻¹) was observed in PEG (Ebadi et al., ). The bands of –CH2 and –CH3 stretching vibrations (2950 cm⁻¹), the –COOH stretching vibration (1747 cm⁻¹), and the C–O–C stretching vibration (1082 cm⁻¹) appeared in the spectrum of PLGA, in agreement with previously published data (Fu et al., ). The two peaks of –CH3 stretching vibrations (2976 cm⁻¹) and C=O stretching vibrations (1739 cm⁻¹) were detected in SAIB.
In addition, the spectra of MINO exhibited multiple complex absorption peaks in the range of 500–1400 cm −1 , as a consequence of the four benzene rings in the molecular structure of MINO. The absorption peaks at 1747 cm −1 , 1650 cm −1 and 1600 cm −1 , owing to the –COOH stretching vibrations of PLGA and the benzene ring vibrations in MINO, were detected in the spectra of MINO-microspheres, which manifested that the MINO was encapsulated into the microspheres. Nevertheless, the characteristic peaks of PEG were not clearly displayed in the spectra of microspheres, which may be involved with the fact that the main absorption peaks of PEG partially coincide with those of PLGA and MINO. Furthermore, compared with the MINO-microspheres, there was no new absorption peak in the spectra of MINO-M-SAIB, which demonstrated that the combination mode of SAIB and MINO-microspheres belonged to physical blending. 3.2. In vitro release of MINO-microspheres and MINO-M-SAIB hybrid depots depicts the release curves of MINO-microspheres (M1, M2, and M3). On the first day, a serious burst release (>65%) was observed in all MINO-microspheres. The cumulative release from M1 was >75%, and the amount of release from M2 and M3 was >80% after 4 days, after which the release patterns of MINO-microspheres were featured by a steady release rate (approximately 2.7% every day) until the 15th day. Finally, the amount of cumulative release was nearly 90% from all groups on the 15th day. The in vitro release profiles from MINO-M-SAIB and MINO-SAIB depots are exhibited in . After the MINO-microspheres (M1, M2, and M3) were dispersed into the SAIB solution to form hybrid depots, the initial burst release decreased significantly from 66.18 to 2.92%, from 71.82 to 3.82%, and from 73.75 to 4.45% on the first day, respectively. Nevertheless, an initial burst release (of up to 38.63%) continued to be displayed in MINO-SAIB depot. Over the first 10 days, the MINO-SAIB and MINO-M-SAIB (i.e. M1-SAIB, M2-SAIB, and M3-SAIB) depots demonstrated fast drug release rate with a cumulative release rate of 58.3%, 17.06%, 18.57%, and 20.7% on the 10th day, respectively. After 10 days, the release profiles of the depots were all featured by a sustained rate (of >0.38% per day) until the 77th day. Some mathematical models have been found to be acceptable for the analysis of drug release, such as the zero order (equation: Q = a + K 0 t), first-order (equation: Q = a(1 − e −k 1 t )), Higuchi (equation: Q = a + K H t 1/2 ), and Ritger-Peppas (equation: Q = K R t n ) models (Ritger & Peppas, ; Cai et al., ; Haroosh et al., ). In our study, the experimental data of drug release were fitted by these four kinetic models to better understand the release mechanism; presents the obtained model parameters. The Ritger-Peppas equation showed high R 2 value (R 2 > .99) to all kinetic data, which represented the best correlation with the release data. Therefore, the Ritger-Peppas equation was applied to analyze the MINO release from depots, the acceptable regression coefficients and the slopes, and the degree of correlation and drug release rate of different depots, respectively, are all represented in . Meanwhile, the linear fits of the MINO release profiles revealed the existence of two release stages for the depots. 3.3. In vitro degradation of MINO-microspheres As shown in , the degradation behavior of MINO-microspheres (i.e. 
3.3. In vitro degradation of MINO-microspheres

As shown in , the degradation behaviors of the MINO-microspheres (M, M1, M2, and M3) were recorded, and the linear fits of the MINO-microsphere degradation profiles demonstrated that M involved two degradation stages, while M1, M2, and M3 involved three degradation stages . According to pseudo-first-order kinetics (Siepmann et al., ), the degradation curves of the microspheres were well fitted, as reflected by the acceptable regression coefficients, and the slope represented the microsphere degradation rate . During the first 7 days, M3, M2, and M1 exhibited faster degradation rates than M, which can mainly be attributed to the amount of drug released from the MINO-microspheres. From days 7 to 90, the degradation rates of all microspheres (including M, M1, M2, and M3) were almost the same, and the amount of degradation was approximately 66.67%, indicating that the concentration of MINO was irrelevant to the degradation of the microspheres (p > .05). Notably, the weight loss of the microspheres accelerated after 45 days. Finally, the degradation of the microspheres (M, M1, M2, and M3) reached 79.3%, 82%, 83%, and 72.7%, respectively, by the 90th day.

3.4. The porosity of depots

The porosity profiles of the different depots are shown in . From days 2 to 45, the porosity of the MINO-M-SAIB hybrid depots (including M1-SAIB, M2-SAIB, M3-SAIB, and M-SAIB) was nearly identical (p > .05) but always higher than that of the MINO-SAIB depot. Moreover, the rates of change of porosity of the MINO-M-SAIB hybrid depots were greater than that of the MINO-SAIB depot at all time points. The porosity of all groups increased rapidly in the first 15 days but remained steady from days 15 to 45.

3.5. Cytotoxicity of the drug delivery systems on osteoblastic cells

The cytotoxicity of the different extracts from the depots and MINO-microspheres was analyzed by the CCK-8 assay. As shown in , compared with the control, the five extracts of M1-SAIB, M2-SAIB, M3-SAIB, MINO-SAIB, and M1 promoted the proliferation of osteoblasts to a greater extent, while the M2 and M3 groups demonstrated slight cytotoxicity. Moreover, no significant difference was evident between the M-SAIB and control groups. Interestingly, the differences between these groups (including M1-SAIB, M2-SAIB, M3-SAIB, and M3) and the control group were statistically significant. Generally, the results revealed that minocycline can promote the proliferation of osteoblasts within a certain concentration range, whereas a high concentration of minocycline can inhibit osteoblast proliferation. These findings cumulatively suggest that M2-SAIB most effectively potentiated osteoblast growth; hence, M2-SAIB was used in the animal experiments in the present research.

3.6. In vivo studies

3.6.1. Micro-CT findings

Compared to the ligation and ligation + M-SAIB groups, an obvious increase in the alveolar crest height was noted in the ligation + MINO-M-SAIB and Periocline® groups at 3 and 6 weeks, as reflected in the 2-D and 3-D micro-CT images of the maxillary first molar . As presented in , the results of volumetric bone loss and linear bone loss, reflected by BV/TV and ABL, all demonstrated a significant preventive effect on the bone loss caused by periodontitis for the ligation + MINO-M-SAIB and Periocline® groups at 3 and 6 weeks compared with the ligation and ligation + M-SAIB groups (p < .05).
In addition, in the ligation, ligation + M-SAIB, ligation + MINO-M-SAIB, and Periocline® groups, the ABL values were 1.464 ± 0.035 mm, 1.489 ± 0.024 mm, 1.038 ± 0.058 mm, and 1.033 ± 0.05 mm, respectively, at 3 weeks, and 1.316 ± 0.03 mm, 1.313 ± 0.071 mm, 0.858 ± 0.035 mm, and 0.876 ± 0.05 mm at 6 weeks, while the ABL value in the control group was 0.527 ± 0.025 mm. At 3 and 6 weeks, compared with the Periocline® group, a slightly greater improvement was noted in the volumetric and linear bone loss in the MINO-M-SAIB group (p > .05). Cumulatively, the MINO-M-SAIB hybrid depot showed a significant preventive effect against bone loss in the rat periodontitis model.

3.6.2. Pharmacodynamic outcomes

As exhibited in , at baseline (0 weeks), redness, bleeding, and swelling of the gingival area around the maxillary first molar were evident in all groups; over time, gingival swelling and bleeding improved significantly in the MINO-M-SAIB and Periocline® groups, although no significant improvement was noted in the ligation and ligation + M-SAIB groups. The GI and PD values for the maxillary first molar at the different observation time points are listed in . From 0 to 6 weeks, the PD values of all groups decreased over time and remained significantly different from those of the control group (p < .05); however, the MINO-M-SAIB and Periocline® groups showed lower PD values than the ligation and ligation + M-SAIB groups at all time points (p < .05). Moreover, the GI value in the MINO-M-SAIB and Periocline® groups decreased gradually during 0–6 weeks, albeit it remained high in the ligation and ligation + M-SAIB groups. Based on the quantification of the clinical periodontal parameters, the MINO-M-SAIB hybrid depot demonstrated good anti-inflammatory efficiency in the animal model of ligature-induced periodontitis.

3.6.3. Histological observations

In the H&E-stained sections , compared with the normal periodontal tissues around the maxillary first molar in the control group, an obvious periodontal pocket created by proliferation of the junctional epithelium along the root, obvious inflammatory infiltration, and significantly resorbed alveolar bone were noted in the ligation and ligation + M-SAIB groups. By contrast, in the Periocline® and ligation + MINO-M-SAIB groups, the gingival junctional epithelium was re-attached to the CEJ and a significantly increased alveolar bone height was recorded at 3 and 6 weeks.

3.6.4. IHC analyses

As shown in , compared with the ligation and ligation + M-SAIB groups, the expression of OPG protein significantly increased and the expression of RANKL significantly decreased at 3 and 6 weeks in the Periocline® and ligation + MINO-M-SAIB groups, while the expressions of OPG and RANKL in the control group were the lowest among all compared groups. This difference was also evident in the quantitative analysis of the expressions of OPG and RANKL .
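To put the micro-CT numbers above in perspective, the protection afforded by each treatment can be expressed as the fraction of ligation-induced bone loss that was prevented, i.e. (ABL_ligation − ABL_treatment) / (ABL_ligation − ABL_control). This derived ratio is our own illustrative re-arrangement of the reported group means at 3 weeks; it is not a statistic computed in the original analysis.

```python
# ABL group means (mm) reported above at 3 weeks.
abl_control = 0.527
abl_ligation = 1.464
abl_mino_m_saib = 1.038
abl_periocline = 1.033

def bone_loss_prevented(abl_treated, abl_lig=abl_ligation, abl_ctrl=abl_control):
    """Fraction of ligation-induced bone loss prevented by a treatment (illustrative only)."""
    return (abl_lig - abl_treated) / (abl_lig - abl_ctrl) * 100.0

print(f"MINO-M-SAIB: {bone_loss_prevented(abl_mino_m_saib):.1f}% of induced bone loss prevented")
print(f"Periocline : {bone_loss_prevented(abl_periocline):.1f}% of induced bone loss prevented")
```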
The MINO-microspheres were almost monodispersed and spherical in shape, and their surface morphology was slightly rough. The diameters distribution of MINO-microspheres are depicted in , while documents their diameter. The CV of M, M1, M2, and M3 were 8.43%, 7.88%, 9.72%, and 9.09%, and the diameter were 5.385 ± 0.454 µm, 5.429 ± 0.428 µm, 5.297 ± 0.515 µm, 5.354 ± 0.487 µm, respectively. Among the microspheres with different MINO loading capacities, the differences in the diameter were not statistically significant ( p > .05). depicts that the drug loading of M1, M2, and M3 were 8.71 ± 0.012%, 11.148 ± 0.077%, and 12.64 ± 0.03%, which accounted for 87.01%, 92.89%, and 90.31% of theoretical drug loading, respectively. Furthermore, the encapsulation efficiency of M1, M2, and M3 were 65.57 ± 3.07%, 57.24 ± 1.45%, and 48.04 ± 3.24%, which demonstrated that the encapsulation efficiency decreased with increasing MINO loading capacity in the microspheres ( p < .05). Following the decrease in the value of contact angle, the material becomes more hydrophilic. As demonstrated in , with increasing the amount of MINO in the microspheres, the contact angle keeps reducing with the contact angle of M0, M1, M2, and M3 as 93.777 ± 0.303°, 91.76 ± 0.1°, 90 ± 0.25°, and 86.107 ± 0.487°, respectively . These results illustrate that the hydrophilicity of the micospheres increased with increasing concentration of MINO ( p < .05). Laser confocal microscopy was employed to visualize the distribution of MINO in microspheres. In , the microspheres with different drug loading capacities are presented with circular yellow-green signals, depicting a relatively uniform distribution of MINO. In addition, when the theoretical drug loading of the electrosprayed microspheres increased from 12 to 14%, the fluorescence density of the microspheres began to increase, and stronger fluorescence density was concentrated on microspheres surface, which represented that the amount of MINO in the microspheres and the amount of MINO on the microspheres surface increased as the MINO loading. DSC analysis was performed on MINO-loaded microspheres as well as MINO, PLGA and PEG . The DSC curve for PLGA showed a small endothermic peak at 57.31 °C and PEG showed an endothermic peak at 64.4 °C for its melting. The MINO thermal analysis revealed two endothermic peaks at 186.48 °C and 197 °C, approximately followed by its degradation. Moreover, no obvious endothermic peak was observed in the MINO-loaded microspheres, which indicated that the thermal stability of the microspheres is improved when compared with the raw materials. Moreover, the MINO peaks were not visualized in the MINO-loaded microspheres indicating its change from the crystalline form to the amorphous form. The FTIR spectra of MINO-microspheres, MINO-M-SAIB hybrid depot, and their constitution are demonstrated in . The absorption bands of PEG emerged at 2880 cm −1 and 1465 cm −1 , which can be a result of –CH2 stretching and bending vibrations, respectively, while the C–O–C stretching vibrations (1094 cm −1 ) were observed in PEG (Ebadi et al., ). The bands of –CH2 and –CH3 stretching vibrations (2950 cm −1 ), the –COOH stretching vibrations (1747 cm −1 ), and the C–O–C stretching vibrations (1082 cm −1 ) were demonstrated in the spectra of PLGA, which agreed with previously published data (Fu et al., ). The two peaks of –CH3 stretching vibrations (2976 cm −1 ) and C = O stretching vibrations (1739 cm −1 ) were detected in SAIB. 
The characteristic peaks at 1649 cm −1 and 1581 cm −1 , due to the stretching vibrations of C = C on the benzene ring and the skeleton vibrations of benzene ring respectively, were testified in the spectra of MINO. In addition, the spectra of MINO exhibited multiple complex absorption peaks in the range of 500–1400 cm −1 , as a consequence of the four benzene rings in the molecular structure of MINO. The absorption peaks at 1747 cm −1 , 1650 cm −1 and 1600 cm −1 , owing to the –COOH stretching vibrations of PLGA and the benzene ring vibrations in MINO, were detected in the spectra of MINO-microspheres, which manifested that the MINO was encapsulated into the microspheres. Nevertheless, the characteristic peaks of PEG were not clearly displayed in the spectra of microspheres, which may be involved with the fact that the main absorption peaks of PEG partially coincide with those of PLGA and MINO. Furthermore, compared with the MINO-microspheres, there was no new absorption peak in the spectra of MINO-M-SAIB, which demonstrated that the combination mode of SAIB and MINO-microspheres belonged to physical blending. In vitro release of MINO-microspheres and MINO-M-SAIB hybrid depots depicts the release curves of MINO-microspheres (M1, M2, and M3). On the first day, a serious burst release (>65%) was observed in all MINO-microspheres. The cumulative release from M1 was >75%, and the amount of release from M2 and M3 was >80% after 4 days, after which the release patterns of MINO-microspheres were featured by a steady release rate (approximately 2.7% every day) until the 15th day. Finally, the amount of cumulative release was nearly 90% from all groups on the 15th day. The in vitro release profiles from MINO-M-SAIB and MINO-SAIB depots are exhibited in . After the MINO-microspheres (M1, M2, and M3) were dispersed into the SAIB solution to form hybrid depots, the initial burst release decreased significantly from 66.18 to 2.92%, from 71.82 to 3.82%, and from 73.75 to 4.45% on the first day, respectively. Nevertheless, an initial burst release (of up to 38.63%) continued to be displayed in MINO-SAIB depot. Over the first 10 days, the MINO-SAIB and MINO-M-SAIB (i.e. M1-SAIB, M2-SAIB, and M3-SAIB) depots demonstrated fast drug release rate with a cumulative release rate of 58.3%, 17.06%, 18.57%, and 20.7% on the 10th day, respectively. After 10 days, the release profiles of the depots were all featured by a sustained rate (of >0.38% per day) until the 77th day. Some mathematical models have been found to be acceptable for the analysis of drug release, such as the zero order (equation: Q = a + K 0 t), first-order (equation: Q = a(1 − e −k 1 t )), Higuchi (equation: Q = a + K H t 1/2 ), and Ritger-Peppas (equation: Q = K R t n ) models (Ritger & Peppas, ; Cai et al., ; Haroosh et al., ). In our study, the experimental data of drug release were fitted by these four kinetic models to better understand the release mechanism; presents the obtained model parameters. The Ritger-Peppas equation showed high R 2 value (R 2 > .99) to all kinetic data, which represented the best correlation with the release data. Therefore, the Ritger-Peppas equation was applied to analyze the MINO release from depots, the acceptable regression coefficients and the slopes, and the degree of correlation and drug release rate of different depots, respectively, are all represented in . Meanwhile, the linear fits of the MINO release profiles revealed the existence of two release stages for the depots. 
In vitro degradation of MINO-microspheres As shown in , the degradation behavior of MINO-microspheres (i.e. M, M1, M2, and M3) were reported, and the linear fits of the MINO-microspheres degradation profiles demonstrated that M involved 2 degradation stages, while M1, M2, and M3 involved three degradation stages . According to pseudo-first-order kinetics (Siepmann et al., ), the degradation curves of the microspheres were good-fitted, which was reflected by the acceptable regression coefficients, and the slope represented the microspheres degradation rate . In addition, during the first 7 days, M3, M2, and M1 exhibited faster degradation rate than M, which can mainly be attributed to the drug release amount from the MINO-microspheres. From days 7 to 90, the degradation rates of all microspheres (concluding M, M1, M2, and M3) were almost the same and the amount of degradation was approximately 66.67%, indicating that the concentration of MINO was irrelevant to the degradation of the microspheres ( p > .05). Notably, the loss of weight of microspheres was accelerated after 45 days. Finally, the degradation amount of all microspheres (including M, M1, M2, and M3) were 79.3%, 82%, 83%, and 72.7% until the 90th day, respectively. The porosity of depots The porosity profiles of different depots are demonstrated in . From days 2 to 45, the porosity of the MINO-M-SAIB hybrid depots (including M1-SAIB, M2-SAIB, M3-SAIB, and M-SAIB) was nearly consistent ( p > .05), but always higher than that of the MINO-SAIB depot. Moreover, the change rates of porosity of the MINO-M-SAIB hybrid depots were greater than that of the MINO-SAIB depot at all time points. The porosity of all groups increased at a quick rate in the first 15 days, but remained steady from days 15 to 45. Cytotoxicity of the drug delivery systems on osteoblastic cells The cytotoxicity of different extracts from depots and MINO-microspheres was analyzed by the CCK-8 assay. As shown in , when compared with the control, the five extracts of M1-SAIB, M2-SAIB, M3-SAIB, MINO-SAIB, and M1 were found to promote the proliferation of osteoblasts to a greater extent, while the M2 and M3 groups demonstrated a slight cytotoxicity. Moreover, no significant difference was evident between the M-SAIB and control groups. Interestingly, the differences between these groups (including M1-SAIB, M2-SAIB, M3-SAIB, and M3) and the control group were statistically significant. Generally, the results revealed that minocycline could promote the proliferation of osteoblasts in a certain concentration range, while, on the contrary, a high concentration of minocycline could inhibit the proliferation of osteoblasts. These findings cumulatively suggest that M2-SAIB mostly potentiate osteoblast cell growth; hence, M2-SAIB was used in animal experiments in the present research. In vivo studies 3.6.1. Micro-CT findings When compared to the ligation and ligation + M-SAIB groups, an obvious increase was noted in the alveolar crest height in the ligation + MINO-M-SAIB and Periocline ® groups at 3 and 6 weeks, as reflected in the 2-D and 3-D micro-CT images of maxillary first molar . As presented in the study , the results of volumetric bone loss and linear bone loss, as reflected by BV/TV and ABL, all demonstrated a significant preventive effect on the bone loss caused by periodontitis for the ligation + MINO-M-SAIB and Periocline ® groups at 3 and 6 weeks when compared with the ligation and ligation + M-SAIB groups ( p < .05). 
In addition, the ABL values in the ligation, ligation + M-SAIB, ligation + MINO-M-SAIB, and Periocline ® groups were 1.464 ± 0.035 mm, 1.489 ± 0.024 mm, 1.038 ± 0.058 mm, and 1.033 ± 0.05 mm, respectively, at 3 weeks, and 1.316 ± 0.03 mm, 1.313 ± 0.071 mm, 0.858 ± 0.035 mm, and 0.876 ± 0.05 mm at 6 weeks, while the ABL value in the control group was 0.527 ± 0.025 mm. At 3 and 6 weeks, a slightly greater improvement in volumetric bone loss and linear bone loss was noted in the MINO-M-SAIB group than in the Periocline ® group ( p > .05). Cumulatively, the MINO-M-SAIB hybrid depot showed a significant preventive effect against bone loss in this periodontitis model. 3.6.2. Pharmacodynamic outcomes As shown in , at baseline (0 week), redness, bleeding, and swelling of the gingival area around the maxillary first molar were evident in all groups; over time, gingival swelling and bleeding improved significantly in the MINO-M-SAIB and Periocline ® groups, whereas no significant improvement was noted in the ligation and ligation + M-SAIB groups. The GI and PD values for the maxillary first molar at the different observation time points are listed in . From 0 to 6 weeks, the PD values of all groups decreased over time and differed significantly from those of the control group ( p < .05); however, the MINO-M-SAIB and Periocline ® groups showed lower PD values than the ligation and ligation + M-SAIB groups at all time points ( p < .05). Moreover, the GI value in the MINO-M-SAIB and Periocline ® groups decreased gradually during 0–6 weeks, whereas it remained high in the ligation and ligation + M-SAIB groups. Based on the quantification of these clinical periodontal parameters, the MINO-M-SAIB hybrid depot demonstrated good anti-inflammatory efficacy in the animal model of ligature-induced periodontitis. 3.6.3. Histological observations In the H&E-stained sections , compared with the normal periodontal tissues around the maxillary first molar in the control group, an obvious periodontal pocket created by proliferation of the epithelial root, obvious inflammatory infiltration, and markedly resorbed alveolar bone were noted in the ligation and ligation + M-SAIB groups. By contrast, in the Periocline ® and ligation + MINO-M-SAIB groups, the gingival junctional epithelium was re-attached to the CEJ and a significantly increased alveolar bone height was recorded at 3 and 6 weeks. 3.6.4. IHI analyses As shown in , compared with the ligation and ligation + SAIB groups, the expression of OPG protein significantly increased and the expression of RANKL significantly decreased at 3 and 6 weeks in the Periocline ® and ligation + MINO-M-SAIB groups, while the expressions of OPG and RANKL in the control group were the lowest among all compared groups. This difference was also evident in the quantitative analysis of OPG and RANKL expression .
Discussion For the treatment of periodontitis, the success of the local application of MINO mainly depends on its sustained release and appropriate concentration in the periodontal pockets (Gibson et al., ).
Consequently, the present study mainly focused on the preparation of a MINO-M-SAIB hybrid depot that can ensure both a sustained release and an appropriate concentration of the antibiotic. The MINO-M-SAIB hybrid depot was expected to achieve antibacterial activity while also promoting new bone formation. For the microspheres, a key component of the MINO-M-SAIB hybrid depot, the surface morphology was slightly rough, possibly because the solvent had not totally evaporated before the droplets reached the aluminum foil during electrospraying (Yao et al., ). Given that particles with a rough surface enhance cell adhesion and facilitate cellular internalization (Chen et al., ), the surface morphology of the microspheres in our study was favorable to the attachment of osteoblasts to the microparticles. Furthermore, the electrosprayed microspheres had a narrow size distribution and relatively uniform drug distribution, consistent with Yao's study (Yao et al., ). With increasing theoretical drug loading, the actual drug loading of the electrosprayed microspheres increased, in agreement with previous studies (Yang et al., ). However, the ratio of actual to theoretical drug loading in the electrosprayed microspheres began to decrease when the theoretical drug loading increased from 12 to 14%, probably because of the limited capacity of PLGA microspheres to carry drugs. Moreover, for the microspheres with different drug loadings, both the encapsulation efficiency results and the laser confocal images of the drug distribution showed that the amount of MINO on the microsphere surface gradually increased with increasing drug loading. These results can be explained by the fact that an increased concentration of MINO makes it difficult for the dissolved MINO to migrate toward the center of the droplet as the solvent evaporates; therefore, MINO is deposited on the microsphere surface (Hong et al., ). In the in vitro drug release experiment, the electrosprayed microspheres showed an obvious burst release (up to 65%); however, after the MINO-microspheres were loaded into SAIB, the burst release of MINO decreased significantly, in agreement with the findings of several studies (Lee et al., ; Wang et al., ; Yang et al., ). Regarding the release behavior of the depots, at the early stage from 1 to 10 days, most of the drug in the MINO-M-SAIB hybrid depots was contained within the microspheres; thus, only the small amount of MINO on the microsphere surface could be released into SAIB, resulting in a significant decrease in burst release, whereas an obvious burst release was observed in the MINO-SAIB depot because a large amount of the drug was dissolved directly in the depot. From 10 to 77 days, the release rate of the MINO-SAIB depot was lower than that of the MINO-M-SAIB hybrid depots , which can mainly be attributed to the lower concentration of drug remaining in the MINO-SAIB depot at this later stage. Notably, with decreasing EE of the microspheres (i.e. an increasing amount of MINO on the microsphere surface), the burst release from the MINO-M-SAIB hybrid depots increased, which could be related to the increased amount of drug dissolved into SAIB from the microsphere surface (Lin et al., ).
Considering the release behavior of the microspheres in the first 7 days, during which all MINO-microspheres released up to 80% of their drug, the weight loss of the microspheres over this period can be attributed mainly to the release of MINO. Likewise, during the first 7 days, the degradation rates of M1, M2, and M3 were higher than that of the blank microsphere M, which is also explained by the amount of drug released from the MINO-microspheres. As shown in previous studies, PLGA microspheres degrade by a hydrolytic mechanism: at the early stage of degradation, increasing surface roughness and surface defects appear on the microspheres as water gradually penetrates them, whereas at the later stage, progressively larger cavities formed by the expanding water penetration induce faster degradation of the microspheres (James et al., ; Xu et al., ). This phenomenon corroborates the results of our study, in which the degradation rate of the microspheres after 45 days was greater than that before 45 days. Additionally, it is worth noting that the porosity of the depots may be relevant to their release behavior: from 2 to 15 days, the porosity increased rapidly, corresponding to the rapid drug release from the depots, whereas after 15 days the porosity increased only slowly, corresponding to the sustained and steady drug release. Notably, although the porosity of the MINO-SAIB depot was lower than that of the MINO-M-SAIB hybrid depots, the drug release rate of the former was greater over the first 10 days, mainly because more MINO was dissolved in the MINO-SAIB depot, thereby confirming that the release behavior of the depots is mainly controlled by the amount of drug dissolved in them. Several studies have demonstrated that MINO within a therapeutic concentration range can potentiate osteoblast growth, whereas a high concentration inhibits osteoblast proliferation (Almazin et al., ; Calasans-Maia et al., ); combining the results of the CCK-8 assay with the release behavior of the depots and microspheres, the present study also confirms this point. In addition, the minimum inhibitory concentrations of MINO against P. actinomycetemcomitans, Porphyromonas gingivalis , and Treponema were only 0.25 µg/ml, 0.125 µg/ml, and 0.125 µg/ml, respectively (Andrés et al., ; Naoko et al., ; Okamoto-Shibayama et al., ). Therefore, the release behavior of MINO in the present study, characterized by a small initial burst release (about 3%) and a sustained rate (over 0.38% per day) until the 77th day, could not only achieve an effective antibacterial concentration of MINO but also ensure osseointegration. In our in vivo research, the MINO-M-SAIB hybrid depot showed an obvious improvement in efficacy against ligature-induced periodontitis, as reflected in the results of the micro-CT and pharmacodynamic evaluations, which was mainly attributed to the osteogenic and antibacterial abilities of MINO and its sustained release from the MINO-M-SAIB hybrid depot. In addition, at 3 and 6 weeks, there was no significant difference in the pharmacodynamic evaluation, bone volume, or bone height between the ligation and ligation + M-SAIB groups, indicating that M-SAIB alone neither damaged the periodontal tissues nor stimulated bone formation.
Periocline ® , a bio-absorbable sustained local drug delivery system containing 20 mg/g MINO, has recently been widely used clinically (Yang et al., ); however, the release of MINO from this system shows a marked initial burst (up to 40%) and lasts for only 7 days (Wang et al., ). As a result, Periocline ® is usually administered by injection into the periodontal pocket once a week, and the need for multiple repeat visits may inconvenience patients and lead to poor compliance with treatment. Moreover, some previous studies have shown that Periocline ® is not conducive to osteogenesis because of large fluctuations of the drug concentration in the periodontal pocket (Vandekerckhove et al., ; Liu et al., ). In our study, however, MINO-M-SAIB, a carrier with sustained, long-term release of MINO, achieved the same treatment effect on periodontal inflammation as Periocline ® , as well as a slightly greater osteogenic effect, as reflected by the results of the micro-CT and pharmacodynamic evaluations at 6 weeks. Finally, it is worth noting that in our previous study, the 2% MINO-PLGA membrane was still characterized by an initial burst release of 20%, and its osteogenic effect against periodontitis was studied at 3 and 6 weeks (Ma et al., ); correspondingly, the therapeutic outcomes at 3 and 6 weeks were also evaluated in the present study for comparison with our previous work. The results showed that both the MINO-M-SAIB depot and the MINO-PLGA membrane achieved good osteogenic effects, but better sustained-release behavior was observed with the MINO-M-SAIB hybrid depot, which is also injectable, making it easier to place in the periodontal pocket than a membrane. The formation of new bone is mainly controlled by osteoblasts and osteoclasts through regulation of the expression of RANKL and its decoy receptor, osteoprotegerin (OPG) (Xu et al., ). RANKL binds to its receptor, RANK, which is expressed by osteoclast precursor cells, to trigger the precursors to differentiate into osteoclasts (Harada & Takahashi, ). Moreover, OPG, which is expressed by osteoblasts, blocks osteoclast activation through its high-affinity binding to RANKL (Wang et al., ). Therefore, combined with the present results on the expression of OPG and RANKL, it can be suggested that MINO promotes the formation of new bone by downregulating RANKL and upregulating OPG. Conclusions In the present study, MINO-M-SAIB hybrid depots were successfully prepared and showed significantly improved controlled release of MINO as well as good anti-inflammatory and osteogenic effects in ligature-induced periodontitis in SD rats. Moreover, the findings indicate that the MINO-M-SAIB hybrid depot can decrease the number of patient visits and can be easily placed in the periodontal pocket, as a result of its sustained release pattern and injectability. Taken together, the present study suggests that the MINO-M-SAIB hybrid depot has great potential for clinical application in the treatment of periodontitis.
Frequent inappropriate implantable cardioverter defibrillator therapy was determined to be dual atrioventricular nodal non-reentrant tachycardia
dcb95b1b-1765-46ed-8e8b-95965ad61158
8036121
Physiology[mh]
Introduction Implantation of an implantable cardioverter defibrillator (ICD) is an effective method to protect patients against sudden cardiac death (SCD); however, inappropriate ICD therapy is common in the real world. Supraventricular tachycardia (SVT), such as atrial fibrillation, is a common cause. Here, we present the first case of inappropriate ICD therapy due to dual atrioventricular node non-reentrant tachycardia (DAVNNRT) in China. DAVNNRT is a rare type of SVT and is often identified as ventricular tachycardia by the supraventricular tachycardia-ventricular tachycardia (SVT-VT) discriminator of the ICD. Case report A 73-year-old man with ischemic heart disease presented with palpitations accompanied by dyspnea and dizziness for almost 1 year. Ambulatory electrocardiography showed frequent multifocal premature ventricular beats and non-sustained VTs. He had suffered syncope and received direct current cardioversion before he came to our center; unfortunately, the electrocardiogram (ECG) was recorded by an external defibrillator and was not preserved. Three years earlier, he had received emergency percutaneous coronary intervention (PCI) for acute myocardial infarction, with drug-eluting stents placed in the left circumflex (LCX) artery at another hospital. Physical examination showed no positive signs on admission. Transthoracic echocardiography revealed an ejection fraction of 32%, with a hypokinetic inferior and inferolateral wall. Hypersensitive troponin I (HsTnI) and brain natriuretic peptide (BNP) levels were increased to 0.463 μg/L and 499.18 ng/L, respectively. Subsequently, the patient underwent repeat PCI; coronary angiography (CAG) showed 80% to 85% stenosis of the left anterior descending (LAD) artery as well as the LCX artery. Consequently, drug-eluting stents were placed in the LAD artery and balloon inflation was successfully performed in the LCX artery. Considering the concurrence of ischemic heart disease (IHD), heart failure with reduced ejection fraction (HFrEF), and syncope due to VT, a single-chamber implantable cardioverter defibrillator (Iforia 7 VR-T DX, Biotronik SE & Co. KG, Berlin, Germany) was implanted for secondary prevention of sudden death. ICD settings were as follows: mode = VDI; VT1/VT2/VF detection rate = 167/180/200 beats per minute (BPM); detection/redetection counter = VT1: 40/30 and VT2: 16/14. Unfortunately, he developed tachycardia and received ICD therapy, including 173 rounds of anti-tachycardia pacing (ATP) and 5 shocks, in the 4 months following the ICD procedure (Fig. a−d). To avoid further inappropriate ICD therapy, we increased the VT1/VT2 detection rates and prolonged the VT detection time. Additionally, amiodarone (600 mg per day) and metoprolol (23.75 mg per day) were prescribed, but they seemed to have no effect. The AV interval of the first 4 A and V waves in Figure b was approximately 570 ms, as shown in Figure e, corresponding to a long PR interval. We found several series of narrow QRS complexes that were automatically diagnosed as VT/ventricular fibrillation (VF) by the device, which delivered ATP therapy to terminate the presumed VF (Fig. a). However, no ECG was recorded until the third admission. The ECG on the third admission (Fig. a) showed a narrow QRS complex tachycardia (NCT). The P waves in Figure a had the same morphology and axis as the P waves shown in Figure b, suggesting that they were of sinus node origin, and each P wave was regularly followed by 2 QRS complexes.
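To illustrate why this 1:2 conduction pattern defeats a rate-based single-chamber detector, the following Python sketch simply doubles a range of sinus atrial rates and compares the resulting sensed ventricular rate with the programmed detection zones from this case; the idealized doubling and the zone logic are simplifying assumptions, not the device's actual detection algorithm.

```python
# Simplified illustration (not the ICD's actual algorithm): with 1:2 AV conduction,
# every sinus P wave yields two ventricular beats, so the sensed ventricular rate
# is roughly twice the atrial rate.
VT1, VT2, VF = 167, 180, 200  # programmed detection rates in BPM (from this case)

for atrial_rate in range(60, 101, 5):      # sinus rates of 60-100 BPM, as reported
    ventricular_rate = 2 * atrial_rate     # idealized 1:2 conduction
    if ventricular_rate >= VF:
        zone = "VF zone"
    elif ventricular_rate >= VT2:
        zone = "VT2 zone"
    elif ventricular_rate >= VT1:
        zone = "VT1 zone"
    else:
        zone = "below detection"
    print(f"atrial {atrial_rate:3d} BPM -> sensed ventricular {ventricular_rate:3d} BPM: {zone}")
# Once the atrial rate reaches roughly 84 BPM, the sensed ventricular rate exceeds
# the VT1 cut-off of 167 BPM, and atrial sensing sees V > A, so the episode is
# classified as VT/VF despite a normal sinus atrial rate.
```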
It was noted that the R1R2 and R2R1 intervals showed the same regular pattern throughout the tracing, and the PR interval (PR1 and PR2) alternated between short and long. Moreover, the second QRS presented with a relatively constant coupling interval after the short-PR conducted beat. The morphology of the 2 QRS complexes was slightly different. This regularity indicated that the 2 related QRS complexes were generated by 1 given P wave, and the relationship was illustrated more clearly in an intracardiac electrogram (IEGM) from the ICD (Fig. c). The patient underwent an electrophysiological study (EPS), and no VT was induced. The NCT mentioned above had a sudden and spontaneous onset without programmed stimulation (Fig. ). The surface ECG shown at the top of Figures a and b is the same as that in Figure a. The CS 9, 10 catheter clearly recorded 1 high-amplitude atrial potential (marked A), along with 2 low-amplitude ventricular potentials (marked V1V2). As shown in Figure b, the radiofrequency ablation catheter (ABL) was positioned at the His bundle to record the His potential. We found 1 A wave followed by 2 H-V complexes: H1-V1 and H2-V2. The AH1 and AH2 intervals were 120 ms and 510 ms, respectively; the latter was markedly prolonged compared with the former. The HV interval was fixed, and both the H1V1 and H2V2 intervals were 60 ms. Atrial burst stimulation (cycle length = 340 ms) could terminate the tachycardia. Atrial S1S2 programmed stimulation (450/400 ms, −10 ms) was applied until the AVN refractory period was reached (450/360 ms). No "jump" phenomenon was observed. Right ventricular apex pacing (S 1 S 1 ) at multiple cycle lengths showed no V-A retrograde conduction (Fig. c). The difference between AH1 and AH2 established that atrial excitation was conducted down to the ventricle via 2 separate AVN pathways with pronounced differences in conduction velocity. Retrograde conduction between the ventricle and atrium was absent. We called this type of tachycardia dual atrioventricular node non-reentrant tachycardia (DAVNNRT). The middle panel of Figure d shows the electrophysiological mechanism of this tachycardia. Atrial beats being conducted to the ventricle along the slow pathway might explain the long AV and PR intervals shown in Fig. b and e, because these AV intervals had approximately the same length as the AV 2 intervals recorded in the EPS (Fig. b). Radiofrequency ablation of the slow pathway (Fig. e) terminated this tachycardia successfully, and the double ventricular response disappeared. We then repeated the EPS: ventricular pacing showed no V-A retrograde conduction, and programmed stimulation at the high right atrium as well as via the CS 7, 8 catheter could not induce tachycardia, nor could tachycardia be induced during an intravenous drip of isoprenaline. No tachycardia occurred spontaneously. There was no inappropriate ICD therapy or tachycardia during follow-up, and the IEGM from the ICD showed normal AV intervals (Supplemental file, http://links.lww.com/MD2/A47 , and Supplementary data for reviewer http://links.lww.com/MD2/A48 ). Discussion and conclusion Here, we report a case of inappropriate ICD therapy due to DAVNNRT, which was undiagnosed before ICD implantation. ICDs are effective in protecting patients against sudden cardiac death (SCD), particularly patients with VT and heart failure due to ischemic cardiomyopathy. However, inappropriate ICD therapy, especially inappropriate shock, is common in the real world.
It has a significant morbidity rate and the potential to trigger ventricular arrhythmias, leading to cardiac decompensation and death. In the MADIT II trial, these phenomena occurred in approximately 11.5% of patients. The most common cause of inappropriate therapy is atrial fibrillation (44%), followed by other SVTs (36%). To our knowledge, this is the first case of inappropriate ICD therapy due to DAVNNRT in a patient with IHD and HFrEF in China. Many methods for minimizing inappropriate ICD therapy have been developed, such as the implantation of ICDs with an SVT-VT discrimination function, home monitoring, atrial sensing, and dual-chamber ICDs. However, in this case, the ICD identified DAVNNRT as VT because atrial sensing indicated a V > A relationship between the atrium and ventricle, ultimately resulting in inappropriate ICD therapy. Common types of NCT include sinus tachycardia, atrial fibrillation/flutter/tachycardia, atrioventricular node reentrant tachycardia (AVNRT), and atrioventricular reentrant tachycardia (AVRT). DAVNNRT is not a common type of NCT; it was first described by Wu et al. Imbalanced electrophysiological properties of the slow and fast pathways in the AVN usually generate AVNRT. In rare situations, supraventricular beats can be conducted simultaneously over both the fast and slow pathways, which generates 1:2 AV conduction and causes DAVNNRT. The decisive conditions for this arrhythmia are as follows: changes in sinus excitation; the occurrence of atrial and ventricular premature beats; and sufficiently different electrophysiological characteristics (including conduction velocity, refractory period, and retrograde conduction) of the 2 pathways and the distal common pathway. The middle panel of Figure d shows how the tachycardia develops. These conditions are affected by the autonomic nervous system and various drugs in most cases; therefore, they rarely occur in the real world. Peiker et al summarized the electrophysiological characteristics of DAVNNRT in a review of 68 cases from 1995 to 2014. The authors indicated that the most significant indication of DAVNNRT on ECG is a P wave followed by 2 narrow QRS complexes. Because it is not widely known, DAVNNRT may be misdiagnosed as atrial fibrillation, atrial premature beats, or other SVTs, especially since dual AV nodal conduction may be intermittent. Bigeminal junctional ectopy could be another arrhythmogenic mechanism for this pattern, but it usually shows more irregular variation in the coupling interval with the preceding sinus beat. DAVNNRT has a fixed H-V interval and only slight variation in the R1R2 interval. Owing to the variable conduction of the slow and fast pathways, slight changes in QRS morphology may be detected. As recently described by De Ponti et al, the different inputs into the bundle of His from the fast and slow pathways, suggesting longitudinal dissociation of the distal AVN extending to the bundle of His (referred to as Zhang's phenomenon, or His electrogram alternans), potentially explain the different QRS complex morphologies. Cardiovascular diseases, such as IHD, could cause changes in the extent and heterogeneity of structural discontinuities. The variability of cardiac cycle length and ventricular wall tension between the 2 ventricular beats may also contribute to the minimal difference between the 2 QRS complexes in this patient.
The difference between the 2 AH intervals ranged from 265 to 520 ms (359 ± 46 ms), suggesting the different electrophysiological characteristics of the 2 pathways, and the slow pathway conduction was slow enough to allow the His-Purkinje system to recover excitability after being depolarized by the first excitation over the fast pathway. Retrograde conduction between the ventricle and atrium was weak or absent, as has been observed in previous studies. As described in the article by Rivner et al, supraventricular beats may be conducted down the fast pathway only, down both the fast and slow AV nodal pathways, or down the slow pathway only. Of note, as in this patient, atrial beats at a rate of 100 BPM were conducted along the slow pathway (Fig. e). In contrast, as shown in Figure b, atrial beats at a rate of 50 BPM were conducted along the fast pathway, which is probably due to the different electrophysiological characteristics of the dual AVN pathways, the presence of concealed retrograde conduction, and the refractory period of the distal bundle of His. This finding is illustrated in the left and right panels of Figure d. Interestingly, tachycardia and inappropriate ICD therapy mainly occurred at atrial rates between 60 and 100 BPM. DAVNNRT can be cured by effective ablation of the slow pathway to suppress its anterograde conduction, and it has a good prognosis. In fact, dual AVN pathways are common, and it is not necessary to eliminate the slow pathway in all patients, but only when it generates tachycardia or in patients who undergo ICD implantation. In conclusion, DAVNNRT is a rare type of tachycardia that appears as an irregular narrow QRS complex rhythm with 1:2 AV conduction and is easily misclassified as VT by the SVT-VT discriminator of the ICD. We present a rare cause of inappropriate ICD therapy. It is important to have a full understanding of this arrhythmia to avoid misdiagnosis and incorrect treatment. We thank AJE ( https://www.aje.cn ) for its linguistic assistance during the preparation of this manuscript. Clinical data collection and interpretation: Chengming Ma, Shiyu Dai, and Xiaohong Yu. Data curation: Chengming Ma, Shiyu Dai. Editing and revision: Xiaomeng Yin and Yunlong Xia. Figure drafting: Chengming Ma and Wenwen Li. Formal analysis: Xiaohong Yu. Software: Wenwen Li. Supervision: Xiaomeng Yin, Yunlong Xia. Writing – original draft: Chengming Ma. Writing – review & editing: Xiaomeng Yin, Yunlong Xia, Lianjun Gao.
Herbert Coddington Major (1850–1921)
f6bc8b2a-a23f-4099-8142-89f728bbcf7d
10973065
Pathology[mh]
Tracing the path of 37,050 studies into practice across 18 specialties of the 2.4 million published between 2011 and 2020
2a515150-e416-439e-b71c-0733b09ac3a5
10115455
Internal Medicine[mh]
Two key elements to the underperformance of national healthcare systems are that: (a) many patients do not receive recommended services and (b) many receive treatment that is neither necessary nor appropriate for them . The Institute of Medicine (IOM) Roundtable on Value and Science-Driven Healthcare argues; however, the challenge is not a matter of overuse (or underuse) of services, but the absence of evidence to assess the appropriateness of treatment approaches . With more than 1 million medical research articles published in the past year alone, the adoption of clinical studies into practice is one critical aspect of this challenge – further compounded by a limited understanding of how the wave of biomedical literature reaches the shores of clinical practice. A few case studies/series have attempted to understand this block in clinical adoption using surrogate markers, such as submission to the Food and Drug Administration (FDA) , number of citations , or incorporation into society-specific clinical guidelines . However, these studies are often too coarse and indirect for a real-time and practical understanding of how clinicians read, synthesize, and integrate the literature into their everyday practice. Furthermore, these studies often conflate translation of basic science with translation of clinical studies to practice, which the IOM has identified as two separate and distinct translational blocks . In addition, using citation in consensus documents or society recommendations is too slow and often limited in scope to provide answers to the questions defined here. Focusing on the translation of clinical studies into practice, we capitalize on the electronic resource UpToDate, which provides current evidence-based clinical information at the point-of-care and is used by over a million clinicians across 32,000 organizations in 180 countries . While the relevance of UpToDate varies, it serves as a reliable and regularly updated source of a specialty-focused clinician-driven curation of the broader literature. Thus, we use citation in UpToDate as one metric to assess translation, especially given its quantifiable impact on patient care . Leveraging a dataset of more than 10,000 UpToDate articles, sampled every 3 months for the past decade (2011–2020), we provide the first thorough and most comprehensive characterization and understanding of the factors that influence the adoption of clinical research by tracing the path of 37,050 newly added references from 887 journals, as well as provide valuable insight into the variation of adoption across 18 non-surgical specialties by clinical topic, article type, geography, and over time. What fraction of the published literature is eventually cited in point-of-care resources? Among the 18 specialties included in our analysis, neurology had the highest citation rate; of the 85,843 research articles published in clinical neurology journals during our sampling window, 2057 (2.4%) were eventually cited at least once in UpToDate. Rheumatology (1442 cited of 62,681 published; 2.3%), hematology (2506 of 110,055; 2.3%), and pediatrics (2678 of 119,486; 2.2%) had similar citation rates. Three specialties had sub-percent citation rates: radiology (1214 cited of 165,985 published; 0.7%), geriatrics (64 of 9781; 0.6%), and pathology (317 of 69,343; 0.4%). All remaining specialties, including internal medicine, had between 1 and 2% of all published research eventually cited in UpToDate. The proportion of citations also varied substantially by article type . 
Practice guidelines represented the most likely article type to be cited, with 9 of the 18 specialties citing >13% (interquartile range [IQR] of 5.1–14.5%) of all practice guidelines published in their respective journals. Although clinical trials (especially phase III trials) were the second most likely article type to be cited (9 of 18 specialties citing >9.5% of all phase III clinical trials published during our sampling window [IQR 3.0–13.0%]), it was also the most variable (SD of 8.7%). In 9 of the 18 specialties, we observed that less than 1 in 10 phase III clinical trials were ever cited at the point-of-care . Of the top-performing specialties, the citation rate of clinical trials was distinctly high in internal medicine (299 cited of 822 phase III clinical trials published; 36.3%), pediatrics (8 of 48; 16.7%), and infectious diseases (15 of 99; 15.1%). Notably, no equivalence trial, among the 43 published across all 18 specialties, was ever cited. Comparatively, pragmatic clinical trials were only cited in 5 of the 18 specialties: oncology (50% of published pragmatic clinical trials cited), internal medicine (20.3%), endocrinology (11.1%), cardiology (8.0%), and pediatrics (7.7%). The remaining 13 specialties had a 0% citation rate for pragmatic clinical trials. Similarly, case reports were also unlikely to be cited at the point-of-care across all specialties, with only 3111 case reports (0.8%) cited of the 403,043 published in specialty journals during our sampling window. Which are the predominant article types cited in point-of-care resources? Despite a cumulative citation rate of <1%, case reports still represented the most common article type of those cited in 2 of the 18 specialties . Among the 1506 citations added from dermatology journals over the past decade, 501 (32.0%) were case reports. Similarly, of the 317 citations from pathology journals, 49 (15.5%) were case reports. Strikingly, case reports were also consistently among the three most commonly cited article types across all but six specialties (median of 7.1% of added citations were case reports; IQR 5.6–12.3%). By comparison, phase III clinical trials represented less than 1.0% of added citations in 9 of 18 specialties (IQR 0.2–1.9%). Of the 18 specialties, anesthesiology, cardiac and cardiovascular systems, critical care, geriatrics, internal medicine, and oncology tended to favor higher-quality evidence ; reviews/systematic reviews, practice-guidelines, and meta-analyses represented the three most cited article types among five of these six specialties. Oncology was relatively unique in that it was the only specialty where phase III clinical trials ranked among the most commonly cited article types; we counted 411 phase III clinical trials among the 3071 references added during our sampling window from oncology journals. What is the time-to-citation by specialty and article type? Time-to-citation did not vary meaningfully between specialties; 50% of articles were cited within a year of publication (IQR 0–4 years). There were significant differences, however, between article types . Phase III clinical trials had the shortest time-to-citation, with 75% cited within the year of publication (IQR 0–1 year). Meta-analyses, practice guidelines, and systematic reviews followed a similar, albeit slightly slower, trend. Case reports had the longest time-to-citation (median 3 years; IQR 1–9 years). Across all specialties, higher quality of evidence correlated with a shorter time-to-citation . 
Is journal impact factor predictive of either proportion of articles cited or time-to-citation in point-of-care resources? For 12 of the 18 medical specialties, journal impact factor was significantly correlated with the proportion of articles cited . In descending order, impact factor was significantly correlated with citation rate in: rheumatology (Spearman’s rho = 0.86, p=1.4 × 10 –6 ), infectious diseases (rho = 0.79, p=7.7 × 10 –5 ), hematology (rho = 0.69, p=8.1 × 10 –5 ), pediatrics (rho = 0.66, p=0.0001), gastroenterology and hepatology (rho = 0.53, p=3.6 × 10 –5 ), cardiac and cardiovascular systems (rho = 0.55, p=1.2 × 10 –6 ), internal medicine (rho = 0.49, p=2.3 × 10 –6 ), neurology (rho = 0.43, p=0.0086), dermatology (rho = 0.39, p=0.02), urology and nephrology (rho = 0.37, p=0.007), endocrinology and metabolism (rho = 0.32, p=0.02), and oncology (rho = 0.29, p=0.01). In other words, in these 12 specialties, journals with higher impact factors tended to have a larger fraction of their published articles cited at the point-of-care. visualizes the respective scatterplots labeled by journal. For the remaining six specialties, the relationship between impact factor and portion of cited articles was not significant (p>0.05). Analogously, journal impact factor was significantly and negatively correlated with time-to-citation for 10 of 18 specialties: infectious diseases (Spearman’s rho = –0.51, p=0.03), internal medicine (rho = –0.408, p=0.0001), hematology (rho = –0.407, p=0.03), pediatrics (rho = –0.40, p=0.03), dermatology (rho = –0.406, p=0.02), pathology (rho = –0.45, p=0.04), neurology (Spearman’s rho = –0.37, p=0.03), urology and nephrology (rho = –0.34, p=0.02), cardiac and cardiovascular systems (rho = –0.36, p=0.002), and oncology (rho = –0.31, p=0.006). In other words, articles from higher impact specialty journals tended to have a quicker time to citation . While the impact factor appears able to (partially) prioritize journals with greater (or quicker) than expected contributions to clinical practice, we sought to better quantify the impact of the journal on clinical practice using two previously introduced new indices (see Materials and methods): the clinical relevancy index (CRI) and the clinical immediacy index (CII). We calculated these indices for all journals in the 18 medical specialties discussed here in . What topics are over-(or under-)represented in abstracts cited in point-of-care resources, compared to uncited literature? Do these topics explain variation in time-to-citation? Abstract contents (as assessed using Unified Medical Language System [UMLS] concepts or terms) can significantly explain citation (versus not) in point-of-care resources, as well as variation in time-to-citation among cited abstracts ( and – ). While results for all 18 specialties are fascinating and informative, the Appendix 1 focuses on two specialties (cardiac and cardiovascular systems and endocrinology and metabolism). What is the influence of department-specific NIH funding on the absolute number of citations and time-to-citation? What is the impact of cumulative NIH funding? As we previously noted in curation of our dataset, it is often difficult to disentangle hospitals, medical schools, and affiliated research institutions. As such, to explore the role and impact of NIH funding, we use city (rather than individual institutions or hospitals) as the unit of analysis. Our analysis primarily focused on the United States for two reasons. 
Firstly, 35% of references from the 18 medical specialties cited in UpToDate were from the United States; by way of comparison, in 2019, 39% of all publications in PubMed were from the United States. Thus, our data were well powered for our funding analyses. Secondly, the NIH publicly releases its funding information with sufficient granularity and standardized specialty labels to enable the analysis. Our department-specific analysis combined eight specialties (cardiac and cardiovascular systems, critical care medicine, endocrinology and metabolism, gastroenterology and hepatology, geriatrics and gerontology, hematology, rheumatology, and oncology) under the 'general and internal medicine' specialty label because a large portion of the funding for these specialties occurs through the NIH department combining name of 'internal medicine/medicine' (i.e. there were no specific department labels for this subset of specialties). Average annual department-specific NIH funding correlated strongly with the absolute number of total citations, in the past decade, across all specialties ( and ): pathology (Spearman’s rho = 0.73, p=1.2 × 10 –10 ), neurology (rho = 0.70, p<2.2 × 10 –16 ), pediatrics (rho = 0.67, p<2.2 × 10 –16 ), radiology (rho = 0.64, p=4.8 × 10 –12 ), internal medicine (rho = 0.60, p<2.2 × 10 –16 ), dermatology (rho = 0.57, p=1.2 × 10 –9 ), urology and nephrology (rho = 0.57, p=3.0 × 10 –13 ), emergency medicine (rho = 0.52, p=2.1 × 10 –7 ), anesthesiology (rho = 0.48, p=9.0 × 10 –6 ), and infectious diseases (rho = 0.41, p=0.006). depicts city-labeled scatterplots that highlight both American cities (and institutions) that were successful at translating research back to practice and cities that were particularly efficient. illustrates the cumulative correlation of NIH funding across all medical and surgical specialties. In sharp contrast to both the department-specific and cumulative funding associations with the number of citations, NIH funding did not correlate with time-to-citation in any specialty . Given the strength of correlation of NIH funding with the absolute number of citations across all medical specialties, we also sought to quantify the cost of one new added citation to the point-of-care (i.e. the slope) using a simple linear model. More concretely, we defined the model as a linear function between the average number of UpToDate citations from each city over the past 10 years and the average annual NIH department-specific funding between 2011 and 2020. This estimate may be interpreted as the approximate (indirect) cost of bringing clinical research to the bedside in NIH funding dollars, with the intercept being proportional to the initial investment 'set-up' cost.
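As a minimal sketch of how this linear model can be fit (with hypothetical city-level numbers rather than the study's data, and assuming, as the dollar-valued slopes and intercepts reported below imply, that funding is regressed on citation counts), the slope then reads directly as NIH dollars per additional point-of-care citation and the intercept as the 'set-up' cost:

```python
# Illustrative only: regress average annual department-specific NIH funding (USD)
# on the average number of UpToDate citations per city, so that the slope is the
# approximate cost of one additional point-of-care citation and the intercept the
# "set-up" cost. Both arrays are hypothetical placeholders.
import numpy as np
from scipy import stats

avg_citations   = np.array([2, 5, 8, 12, 20, 35, 60, 90], dtype=float)                  # per city
avg_funding_usd = np.array([0.35e6, 0.4e6, 0.5e6, 0.6e6, 0.8e6, 1.1e6, 1.6e6, 2.1e6])   # per city

fit = stats.linregress(avg_citations, avg_funding_usd)
print(f"cost per new citation ~ ${fit.slope:,.2f} (SE ${fit.stderr:,.2f})")
print(f"estimated set-up cost  ~ ${fit.intercept:,.2f}")
```

Repeating this fit per specialty (or per department label) yields the specialty-specific cost estimates reported next.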
In descending order, a new citation at the point-of-care costs: $48,086.18 per new point-of-care UpToDate citation from urology and nephrology journals (± SE of $7410.68 and an intercept of $470,546.67), $34,529.29 from dermatology journals (± $4043.66 and an intercept of $251,133.85), $13,286.72 from general and internal medicine specialty journals (± $746.58 and an intercept of $673,780.86), $10,655.93 from emergency medicine journals (± $2795.21 and an intercept of $265,336.27), $6,832.46 from pediatrics journals (± $756.08 and an intercept of $325,662.10), $6482.30 from anesthesiology journals (± $1393.98 and an intercept of $206,374.57), $6,227.91 from radiology journals (± $1019.13 and an intercept of $254,528.95), $6106.92 from neurology journals (± $607.81 and an intercept of $261,566.15), and $874.85 from pathology journals (± $229.67 and an intercept of $174,163.00). The model was not significant for infectious diseases. We subsequently generated US-focused maps to highlight, per specialty, institutions and cities successful at translating clinical research from specialty journals to the bedside ( and ).
We have demonstrated that, depending on specialty, 0.4–2.4% of published clinical research is eventually cited in UpToDate. Our analysis also revealed several alarming trends: most clinical trials are never cited at the point-of-care – fewer than 1 in 10 phase III clinical trials are ever cited in 9 of 18 medical specialties. In the best-performing specialty (general and internal medicine), the citation rate peaked at 36%; that is, at least 64% of trials are never cited. This was in line with a recent manual review of 125 randomized interventional clinical trials published in 2009–2010 in three disease areas (ischemic heart disease, diabetes mellitus, and lung cancer), which demonstrated only 26.4% of trials fulfilled four conditions of informativeness: importance of the clinical question, trial design, feasibility, and reporting of results . This trend was generally consistent among other higher quality-of-evidence research; 9 of 18 specialties had a citation rate of <13% for practice guidelines. Comparatively, while less than 1% of published case reports are ever cited, they represent one of the most commonly cited article types. For some specialties (e.g. dermatology), case reports represented nearly a third of newly added references. The persistence of case reports as a resource to guide practice is not necessarily problematic in itself; some are helpful in certain circumstances (e.g. to address rare conditions or phenomena that are hard to evaluate via other means). However, in some specialties, our results suggest that case reports outnumber most article types in UpToDate reference lists, including higher quality of evidence such as meta-analyses, systematic reviews, practice guidelines, and clinical trials. Further investigation of these case reports will highlight unmet clinical needs/questions that should be addressed with higher quality of evidence. Reassuringly, a subset of specialties (e.g. ‘cardiac and cardiovascular systems,’ ‘general and internal medicine,’ and ‘oncology’) did incorporate higher quality of evidence into point-of-care reference lists, with more clinical trials cited than all other specialties cumulatively. In-depth investigation of differences between specialties in how clinical trials are designed/funded and how practice guidelines are formulated will likely reveal strategies for translating clinical research that should be applied more broadly. Exploring over- and under-represented topics provided a fascinating perspective on how specialties prioritized particular topics, treatment paradigms, and clinical discoveries over the past decade. While a more thorough investigation is warranted, our preliminary study revealed that some specialties demonstrated a clear bias toward particular disease topics and treatment paradigms (e.g. cardiac and cardiovascular systems and oncology), while others were far more diverse (e.g. endocrinology and metabolism). The strong correlation of number of citations with NIH funding (both department specific and cumulative) suggests that funding may, in part, dictate the research focus and, thus, which references are ultimately successful in making it back to the point-of-care. Limitations There are many possible reasons for the low rate of citation of published research noted in our analysis (e.g. 
it is possible that some of the published research does not adequately answer a particular clinical question). It is also quite likely that the problem is at least partly one of translation. Both practice guidelines and clinical trials have a low citation rate despite their design and implementation requiring uncertainty or equipoise surrounding two or more care options (i.e. they are designed to help clinicians choose one treatment or diagnostic approach over another). Thus, the low citation rate of clinical trials, practice guidelines, and other high-quality evidence (e.g. systematic reviews) itself suggests a translational block. Because several factors could drive the low citation rate, we explicitly investigated the citation rate of each article type (as well as their topic distributions) separately and cumulatively, so that quality of evidence could be assessed independently of the global citation rate. Although it represents the largest and most comprehensive point-of-care resource, UpToDate is also just one perspective of how clinicians synthesize and integrate clinical research. Besides scholarly medical, nursing, and pharmacy journals (as major examples), many additional sources of information beyond UpToDate are readily available and accessed by these diverse stakeholders. Examples include in-person and online professional society communications and meetings, daily inter-professional interactions, local health system guideline consensus groups, access to clinicians who practice in ‘centers of excellence,’ and point-of-care decision support in electronic medical records. However, these are too informal and inaccessible for a systematic and comprehensive analysis of the translational highway between the clinical research enterprise and medical practice. Thus, UpToDate is a small but robust window into mapping and modeling translation. We also recognize that the relevance of UpToDate varies by specialty and training status, and thus, its contents do not necessarily reflect the breadth or depth of medical care provided in a subset of medical specialties (and by extension, the body of evidence that underpins that care). Thus, while we use citation in UpToDate as a metric of translation, citation does not necessarily indicate actual changes in practice; rather, citation represents adoption of knowledge to support current approaches, inform new changes in practice, or highlight points of controversy. Importantly, strengthening our conclusions, UpToDate does serve as a reliable source of specialty-based, clinician-driven curation of the broader literature; its regularly updated reference lists accurately represent a clinician’s perspective on the ever-expanding literature . Thus, rather than viewing our analysis as a comprehensive look at all evidence that underpins all care, we suggest that this analysis be viewed as a standardized (cursory) survey of a fixed set of clinicians over the past decade on particular topics (defined by the scope of UpToDate articles). Our division of the literature (and medical journals) into subspecialties using Clarivate’s Journal Citation Reports admittedly does not capture the overlap/nuance of the boundaries between specialties (and journals); however, we believe it made our analyses much clearer and easier to understand. Where appropriate (e.g. 
citation rates of article types and cumulative NIH funding models), we analyzed all specialties together to enable us to retain a bird’s eye view on global trends across all 18 medical specialties. Conclusions Tracing the path of clinical research into medical practice reveals substantial variation in how specialties prioritize and adopt clinical research into practice. The success of a subset of specialties in incorporating a larger proportion of published research, as well as high(er) quality of evidence, demonstrates the existence of translational strategies that should be applied more broadly. While the findings are largely descriptive and exploratory, the dataset and method described here are designed to generate hypotheses regarding the translation of biomedical research into practice. In designing the dataset, we sought to provide a baseline for monitoring the efficiency of research investments and ultimately lead to the development of mechanisms for weighing the efficacy of reforms to the biomedical scientific enterprise (e.g. quantifying impact at point-of-care rather than number of publications or citations). 
We sampled all UpToDate articles (n=10,036 articles) multiple times over the past decade using the Internet Archive’s WayBackMachine; capturing 169,203 unique versions over a median of 39 months per article (IQR 16–73 months). The WayBackMachine is a digital archive of the World Wide Web that preserves archived copies of defunct or revised web pages. The reference list of each UpToDate article was subsequently extracted a median of 14 times (IQR 6–25 times) over its respective sampling window. The reference lists were subsequently filtered to exclude non-research references as defined by MEDLINE. Our final dataset consisted of 83,423 unique references from 4055 journals newly cited in the sampling window. 
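A minimal sketch of how newly added references (and, as described next, their time-to-citation) could be derived from such longitudinal snapshots is shown below. The data layout and the example dates are hypothetical and are only meant to illustrate the idea, not the authors' implementation.

# Sketch (assumed data layout, not the authors' code): given dated snapshots of an
# UpToDate article's reference list, find when each reference first appears and
# compute its time-to-citation relative to its publication date.
from datetime import date

# snapshots: snapshot date -> set of reference IDs (e.g. PMIDs); all values invented
snapshots = {
    date(2012, 3, 1): {"pmid:111", "pmid:222"},
    date(2015, 7, 1): {"pmid:111", "pmid:222", "pmid:333"},
    date(2019, 1, 1): {"pmid:111", "pmid:333", "pmid:444"},
}
publication_dates = {"pmid:333": date(2014, 5, 1), "pmid:444": date(2016, 9, 1)}

ordered = sorted(snapshots)
baseline = snapshots[ordered[0]]          # first captured version of the article

first_seen = {}                           # reference -> date it first appears
for snap_date in ordered[1:]:
    for ref in snapshots[snap_date] - baseline:
        first_seen.setdefault(ref, snap_date)

for ref, seen in first_seen.items():
    pub = publication_dates.get(ref)
    if pub is not None:
        print(ref, "time-to-citation:", (seen - pub).days, "days")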
The first version of each UpToDate article served as a baseline to enable us to calculate the time-to-citation for all references (a brief Methods Supplement [Appendix 1] provides further details about UpToDate). For clarity, throughout the text, we use the shorter phrase ‘citation at the point-of-care’ as equivalent to ‘citation in an UpToDate article during our sampling window.’ We subsequently paired all references with the corresponding entries in PubMed to extract the associated abstracts, author affiliations, and date of publication. Thus, our final dataset for analysis represented a curated list of all references added over the past 10 years to UpToDate, alongside relevant metadata (such as journal, year of citation, author affiliations, etc.). We extracted the UMLS concepts from the paired abstracts using SciSpacy , which enabled us to map the abstract free text to UMLS concepts . This pipeline is similar to the one used by PubMed to index articles for search engines and enabled us to extract ‘high-level’ concepts from the abstracts of all references. The performance of these algorithms (including validity and misclassification) is described elsewhere . For this manuscript, we subsequently filtered the references to those published in non-surgical specialties as defined by Clarivate’s Journal Citation Reports (i.e. the categories specified in assessment of the impact factor): anesthesiology, cardiac and cardiovascular systems (i.e. cardiology), clinical neurology, critical care medicine, dermatology, emergency medicine, endocrinology and metabolism, gastroenterology and hepatology, geriatrics and gerontology, hematology, infectious diseases, medicine (general and internal), oncology, pathology, pediatrics, ‘radiology, nuclear medicine and medical imaging’ (i.e. radiology), rheumatology, and urology and nephrology. This filtered subset included 37,050 newly added unique references from 887 journals, alongside relevant metadata. To enable comparisons with the uncited literature, we used PubMed to identify all articles published during our sampling window in these 887 journals. These 2.4 million articles were similarly processed (i.e. matched to appropriate metadata). For all analyses, summary statistics were generated using base functions in R v4.1. Where appropriate, p-values were corrected for multiple testing using Benjamini-Hochberg. For all 887 journals, we also calculated two new indices: the CRI and the CII. Unlike the impact factor, these metrics exclusively quantify citations in point-of-care resources (i.e. UpToDate), rather than the overall number of citations in other research publications, and thus indirectly assess the presumed impact of any given journal on clinical practice. The CRI captures the long-standing impact of the journal over the past decade as the fraction of articles from the journal cited at least once in UpToDate and is defined as: 
$$\mathrm{CRI}_{\text{decade}} = \frac{\text{articles cited in UpToDate in the past decade}}{\text{total articles published in the past decade}}$$

Similarly, by using median time-to-citation, the CII captures journal-specific trends in time-to-clinical-adoption (i.e. a measure of latency for each journal) that is distinct from the overall impact of the journal, defined as:

$$\mathrm{CII}_{\text{decade}} = \mathrm{median}\left(\text{date of added citation in the past decade} - \text{date of publication}\right)$$
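A small sketch of how these two journal-level indices could be computed from the curated reference data is given below; the record layout and numbers are hypothetical and the code is only an illustration of the definitions above.

# Illustration of the CRI and CII definitions (hypothetical record layout and values).
from datetime import date
from statistics import median

# For one journal: number of articles it published in the past decade, and for each
# article cited in UpToDate, a (citation date, publication date) pair.
articles_published = 1200
cited = [
    (date(2016, 4, 1), date(2014, 11, 1)),
    (date(2018, 2, 1), date(2017, 6, 1)),
    (date(2020, 9, 1), date(2013, 1, 1)),
]

cri = len(cited) / articles_published                 # fraction of articles ever cited
cii_days = median((c - p).days for c, p in cited)     # median citation latency

print(f"CRI (decade): {cri:.4f}")
print(f"CII (decade): {cii_days:.0f} days (~{cii_days / 365.25:.1f} years)")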
Does drug dispensing influence patients’ medication knowledge and medication adherence? A systematic review and meta-analysis
b3897bf1-5c78-4fce-915b-f4202d6d08d4
11776115
Health Literacy[mh]
Drug dispensing is a clinical service provided by pharmacists, especially in community pharmacies, and it assists a large number of people seeking medicine and/or pharmaceutical counseling . Although there are several definitions for drug dispensing, studies agree that this service involves the provision of medicine, along with counseling on how to use it . This service represents one of the last opportunities for pharmacists to prevent, identify and treat drug-related problems and promote the rational use of medicines . A systematic review published by Pizetta et al. (2021) showed that drug dispensing positively influences the patient’s clinical, humanistic and economic outcomes, such as asthma control, patient satisfaction, and cost-saving . Despite the importance of this evidence, other outcomes, such as the patient’s medication knowledge and medication adherence, also can be used to measure and assess the contribution of pharmacists to individuals and healthcare systems during dispensing . The relationship between patients’ medication knowledge and medication adherence is close and well-documented in the literature . Patient medication knowledge plays an important role in efficient disease management, since a lack of adequate comprehension about medicines can lead to many problems, such as medication non-adherence, pharmacotherapy failures and increase in adverse effects . Medication nonadherence is a concern in healthcare, as it is associated with negative health outcomes, such as an increase in hospitalization, decrease in the patient’s quality of life and an increase in the total expenses in healthcare . The literature shows that other clinical services, such as medication review and medication therapy management increase the patient’s medication knowledge and medication adherence . However, there is a gap in the literature on studies that summarize the evidence of the influence of drug dispensing, which has pharmacist counseling as a core element, in these outcomes. This systematic review aimed to evaluate the influence of drug dispensing on patients’ medication knowledge and medication adherence. This systematic review was conducted according to the Cochrane Handbook for Systematic Reviews of Interventions’ guidelines  and reported according to the Preferred Reporting Items for Systematic Reviews and Meta Analysis (PRISMA) . The systematic review has been registered on PRÓSPERO (CRD 42023425693). Concepts adopted in this study Drug dispensing is conceptualized in this study as the safe provision of medicine, along with counseling on how to use them . MeSH (Medical Subject Heading) concepts were used for patient’s medication knowledge and medication adherence. Thus, medication adherence is understood as the voluntary cooperation of the patient in taking medicine as prescribed, which includes the duration, dosage and frequency . The patient’s medication knowledge is considered their health knowledge related to medicines, including those being used and why, as well as instructions and precautions . Research question The PICO elements were used to structure the study question. PICO represents an acronym for: (P) patient or problem, (I) intervention or exposition to be investigated, (C) comparison intervention or exposition and (O) interest outcome. 
In this study, PICO corresponded to: (P) patients that received a drug dispensing service; (I) drug dispensing provided by pharmacists in community pharmacies; (C) not applicable; and (O) patient’s medication knowledge and medication adherence, which resulted in the following question: Does drug dispensing influence the patient’s medication knowledge and medication adherence? Search strategy A systematic search in literature was performed on March 9th, 2024, in the following databases: PubMed/Medline, Biblioteca Virtual da Saúde, Web of Science and Embase. A gray literature search was performed in Google Scholar through an analysis of the 100 first results, and on Open Thesis. The search strategy was elaborated utilizing descriptors related to controlled vocabulary (Medical Subject Headings—MeSH, Health Sciences descriptors and Emtree—Embase’s Therasus), as well as terms related to uncontrolled vocabulary. The terms were related to “dispensing”, “medication adherence”, “patient’s medication knowledge” and “pharmacy”. Terms within each concept are joined together with the Boolean ‘OR’ operator, the concepts are combined with the Boolean ‘AND’ operator, and the search strategy was adapted to each database. Moreover, the reference lists of all eligible studies were manually reviewed. The complete search strategy is available in Supplementary Material 1. Eligibility criteria Studies were eligible if they met the following criteria: (i) original studies that evaluated drug dispensing provided by pharmacists; (ii) studies performed in community pharmacies in an ambulatorial level (non-hospital); (iii) studies that evaluated the influence of drug dispensing on patient’s medication knowledge and/or medication adherence. The following exclusion criteria were applied: (i) review studies, conference abstracts; letters to the editor; (ii) studies which were not available in full; (iii) studies whose results did not separate the intervention of pharmacists from the interventions of other professionals; and (iv) the results did not separate drug dispensing from other services/interventions. The studies were not excluded based on the design, publication year or methodological quality. It is worth emphasizing that on the occasions in which the complete paper was not available, the researchers contacted the study’s authors through ResearchGate ( www.researchgate.net ) and/or via e-mail and through the Federal University of Espírito Santo’s Integrated Library System. Study selection The electronic search results were inserted in the online platform Rayyan ( http://rayyan.qcri.org ) and duplicated papers were excluded . Next, two researchers (EPCS; HRVJ) independently reviewed the titles and abstracts and then the full texts according to the eligibility criteria. In case of discrepancies between the two reviews, a third reviewer (KSSR) was assigned to resolve them. Data extraction Two reviewers independently extracted the data from the included papers using a standardized data extraction spreadsheet. The following data was extracted: authors, country, design, setting, objectives, outcomes, dispensing concept adopted in the study; instruments utilized to evaluate the knowledge and medication adherence; main results and limitations. The authors of this systematic review categorized the study designs of the articles that did not report this information . In case of discrepancies between the two reviews, a third reviewer was assigned to resolve them. 
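Looping back to the search-strategy construction described above (synonyms joined with OR within each concept, concepts combined with AND), a minimal sketch of how such a Boolean query string could be assembled is shown below; the term lists are illustrative placeholders, not the review's actual strategy.

# Illustrative only: assemble a Boolean search string by joining synonyms with OR
# within each concept and combining concepts with AND (terms are examples, not the
# review's actual search strategy).
concepts = {
    "dispensing": ["drug dispensing", "dispensing service"],
    "outcomes": ["medication adherence", "medication knowledge"],
    "pharmacy": ["community pharmacy", "pharmacist"],
}

def build_query(concepts: dict[str, list[str]]) -> str:
    blocks = []
    for terms in concepts.values():
        blocks.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(blocks)

print(build_query(concepts))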
Quality assessment Two researchers independently assessed the quality of the included studies according to the tools made available by the JBI Evidence Synthesis ( https://synthesismanual.jbi.global ): JBI Critical Appraisal Checklist for quasi-experimental studies ; JBI critical appraisal tool for the assessment of risk of bias for randomized controlled trials , JBI Critical Appraisal Checklist for analytical cross sectional studies  and the JBI Critical Appraisal Checklist for cohort studies . The items were judged as Yes, No, Unclear or Not applied. Discrepancies were resolved by a third reviewer. The data were presented in tables and reported as the percentage of compliance with the items. Data synthesis The data were descriptively presented in tables and figures when needed. A meta-analysis was conducted for randomized controlled trials which used the outcome of medication adherence using the RStudio version 4.3.3 software with the ‘Meta’ V.7.0–0 package. Adherence rate was used as the effect measure, with a 95% confidence interval (CI). The meta-analysis was performed using the random-effects model and the Mantel–Haenszel method. Heterogeneity was determined by calculating the I 2 (0–40% may not be important; 30–60% may represent moderate heterogeneity; 50–90% may represent substantial heterogeneity; and 75–100% may represent considerable heterogeneity) with a significance level of p < 0.10 . The other two intervention studies were pre-post intervention studies and did not provide sufficient data for pooling; therefore, they were not included in the meta-analysis. It was not possible to perform a meta-analysis for the patient’s medication knowledge outcome due to methodological issues related to the study designs. 
A total of 7,590 studies were identified in the initial search, of which 11 papers met the eligibility criteria and were included in this systematic review. The selection process flowchart is shown in Fig. . The reasons for excluding articles after reviewing their complete text are described in Supplementary Material 2. Study characteristics The studies’ characteristics are described in Table . All 11 articles were published in English, between the years of 2009 and 2022. The studies were published in different countries in Africa, Latin America, Asia, Europe and Australia. The majority of the studies were intervention studies ( n = 7; 63.63%) . The number of patients in the studies varied from 39 to 20,334. The participants’ ages ranged from 18 to 81 years. The majority of the participants were female in seven studies (63.63%) . The participants in three studies (33.33%) had university and high school degrees . Three studies (27.27%) reported that the patients presented chronic conditions, such as osteoporosis, diabetes and hypertension . Instruments used to measure outcomes Regarding the instruments used to measure patients’ medication knowledge, two studies utilized validated questionnaires: Conocimiento del Paciente sobre sus Medicamentos (CPM-ES-ES)  and a questionnaire validated by Fröhlich, Pizzol, Mengue (2010) . Medication adherence was measured in a heterogeneous manner, and the following methods were utilized: pill counting , questionnaires (Brief Medication Questionnaire , Culig questionnaire of adherence  and Medication Adherence Rating Scale—MARS-10 ), the covered days proportion method (this method refers to the number of medicines dispensed—multiplied by 7 for weekly dosages or by 30 for monthly dosages, when appropriate—divided by the number of days of usage, from first to last prescription) , and the Morisky Medication Adherence Scale (MMAS-4) (Table ). Drug dispensing concept adopted in studies Table shows the concepts of drug dispensing adopted in the studies. Most of the studies did not clearly detail the definition of the service. Although there were differences in the concepts, it was observed that patient counseling was an essential step of drug dispensing. Influence of drug dispensing on the outcomes Patient’s medication knowledge Four studies evaluated the influence of drug dispensing on patient’s medication knowledge. Three of these studies focused solely on patient’s medication knowledge , while one study also assessed medication adherence . Moreover, two of these studies were interventional  and two were observational . These studies reported that the patient’s knowledge increased after drug dispensing (Table ). Medication adherence Eight studies evaluated the influence of drug dispensing on medication adherence . 
Out of these, five studies were interventional  and two were observational . Three studies reported that drug dispensing significantly influenced the increase in medication adherence . In addition, there was no statistically significant difference in medication adherence after drug dispensing in four studies . Table describes the influence of drug dispensing on the outcomes. The meta-analysis indicated that there was no statistically significant difference in medication adherence post-dispensing (RR: 1.19; 95%CI 0.99 to 1.43, p = 0.07). Despite the uncertainty in I 2 and Tau 2 due to the low number of studies, a moderate level of heterogeneity was observed (I 2 = 44%, p = 0.17) (Fig. ). Study quality evaluation The methodological quality of the studies is presented in Tables , , and . In relation to the three quasi-experimental studies, two of them (77.77%) met seven out of the nine evaluated criteria and one study (66.66%) met six of the evaluated criteria . Moreover, two of the four randomized clinical trials (84.61%) met 11 of the 13 evaluated criteria . One cross-sectional study (37.5%) met only three of the eight evaluated criteria, one study (87.5%) met seven criteria and one study (100%) met all of the criteria . The main limitations of the lower-quality cross-sectional research were related to several issues, including the lack of clearly defined criteria for sample inclusion, limitations in exposure measurements, and the identification of confounding factors. One cohort study met seven (63.6%) of the 11 evaluated criteria . 
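To illustrate the pooling approach reported above, the sketch below shows a generic DerSimonian–Laird random-effects combination of study risk ratios with an I 2 estimate. It is a simplified stand-in for the R 'meta' package's Mantel–Haenszel computation, with invented event counts rather than the review's actual data.

# Simplified random-effects pooling of risk ratios (DerSimonian-Laird) with I^2.
# Event counts are invented for illustration; not the review's data and not the
# exact Mantel-Haenszel calculation used by the R 'meta' package.
import math

# (events_intervention, n_intervention, events_control, n_control) per trial
trials = [(40, 60, 30, 58), (25, 50, 24, 52), (55, 80, 41, 79)]

log_rr, weights_fe = [], []
for a, n1, c, n2 in trials:
    rr = (a / n1) / (c / n2)
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2          # variance of log(RR)
    log_rr.append(math.log(rr))
    weights_fe.append(1 / var)

# Cochran's Q, I^2 and tau^2 (between-study variance)
fixed = sum(w * y for w, y in zip(weights_fe, log_rr)) / sum(weights_fe)
q = sum(w * (y - fixed) ** 2 for w, y in zip(weights_fe, log_rr))
df = len(trials) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
tau2 = max(0.0, (q - df) / (sum(weights_fe) - sum(w * w for w in weights_fe) / sum(weights_fe)))

# Random-effects pooled estimate and 95% CI
weights_re = [1 / (1 / w + tau2) for w in weights_fe]
pooled = sum(w * y for w, y in zip(weights_re, log_rr)) / sum(weights_re)
se = math.sqrt(1 / sum(weights_re))
print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f} to {math.exp(pooled + 1.96 * se):.2f}), "
      f"I^2 = {i2:.0f}%")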
We found a limited number of studies in this systematic review which evaluated the influence of drug dispensing on the patient’s medication knowledge and medication adherence. We had expected to find a greater number of studies, as dispensing is one of the services most traditionally practiced by pharmacists in many countries . Although there is limited evidence on the influence of other clinical pharmacy services on patients’ medication knowledge, systematic reviews have shown a significant improvement in medication adherence among patients receiving pharmacist interventions . Specifically regarding the dispensing service, a systematic review published by Pizetta et al. (2021) also found a limited number of studies that evaluated the influence of drug dispensing at community pharmacies on patients’ health outcomes . These data suggest that there is not a large body of evidence from studies which focus on evaluating the impact of this service. These findings can be explained (among other factors) by the characteristics of dispensing, usually described as a fast service with obstacles to documentation, which may hinder conducting studies involving this service . Although a patient’s medication knowledge and medication adherence are not final or direct outcomes, they do help evaluate the contributions made by the pharmacist to the patient . Thus, these outcomes might be utilized in future studies which aim to investigate the influence of dispensing on patient care. Drug dispensing increased patients’ medication knowledge in the majority of the studies in this review. Our results are consistent with other studies examining the influence of pharmacist interventions on patient knowledge . These findings were expected, as pharmaceutical counseling is an essential step of drug dispensing and of other clinical services . Pharmaceutical counseling increases the patient’s comprehension, helping guarantee that the medicine is used in a safe and appropriate way and avoiding issues related to medicine use . Nonetheless, our findings should be interpreted with caution, as most of the studies were pre–post intervention studies. Although these are important designs, randomized clinical trials remain better suited to evaluate the effectiveness of interventions, as they avoid potential biases . Future studies can be conducted to assess the influence of clinical pharmacy services on patients’ medication knowledge and its effects on empowerment and self-management abilities in relation to their treatment. The results of the meta-analysis indicated that there was no statistically significant difference in medication adherence before and after dispensing. Systematic reviews report that other clinical pharmacy services increase medication adherence, contributing to satisfactory results . 
It is worth noting that medication adherence is complex: characteristics of patients, prescribers and the health system, factors related to therapy, and health/disease conditions can all influence it . Therefore, differences in variables between studies included in the meta-analysis, such as socioeconomic status, comorbidities, and medications dispensed, may influence non-adherence beyond the pharmacist’s control. An isolated strategy such as drug dispensing may therefore not be sufficient to solve non-adherence on its own, although it can contribute to doing so. Longitudinal studies are necessary to evaluate how consistent medication adherence over extended periods affects health outcomes, beyond short-term improvements. Regarding the evaluation of the outcomes, most of the studies used validated instruments, the most common being the Conocimiento del Paciente sobre sus Medicamentos (CPM-ES-ES) to evaluate knowledge and the Morisky Medication Adherence Scale to evaluate medication adherence. Using valid and reliable instruments helps ensure that the construct of interest is in fact measured, which contributes to obtaining precise, reliable and representative results . Considering this, these instruments may be used in future studies intending to evaluate these outcomes in drug dispensing, which will also facilitate comparison of findings. This systematic review has both strengths and limitations. Although our search strategy aimed to balance sensitivity and precision to ensure the inclusion of as many relevant studies as possible in the review, we cannot rule out the possibility of selection bias. It was not possible to perform a meta-analysis for the patient’s medication knowledge outcome due to methodological issues related to the study designs. Although studies agree that pharmaceutical counseling is an essential step in drug dispensing, there was no consensus on the concept of drug dispensing in the studies. Thus, the lack of standardization in the work process of this service is also an important factor, which can influence its quality . As far as we know, this is the first systematic review with meta-analysis to assess the influence of drug dispensing on patients’ medication knowledge and medication adherence. Our findings can be used by researchers to guide future studies that intend to evaluate the impact of this service, and also by pharmacists and managers to guide clinical practice. Furthermore, we followed a rigorous methodological process based on the main guidelines for conducting systematic reviews. Finally, no article was excluded based on methodological quality or study design. The results of this systematic review suggest that drug dispensing may lead to an increase in patients’ medication knowledge. The meta-analysis indicated that there was no statistically significant difference in medication adherence after dispensing. Further studies are needed to confirm this finding. Our findings may help pharmacists measure the influence of their practice on patients’ medication knowledge and adherence. In addition, it may provide evidence to support pharmacists and managers in seeing the dispensing service, including medication counseling as a core component, as an opportunity for improving patient care. 
This systematic review may contribute to evidence-based decision making, serving as a base for planning and developing public policies and interventions which encourage implementing and qualifying drug dispensing to fulfill the needs of patients, their families and communities. Supplementary Material 1. Supplementary Material 2.
Immunosuppression by hydroxychloroquine: mechanistic proof in in vitro experiments but limited systemic activity in a randomized placebo-controlled clinical pharmacology study
3ea42841-858f-4299-ba82-2fcaabe48fc7
9945836
Pharmacology[mh]
Hydroxychloroquine (HCQ) is a broad immunosuppressive drug, initially developed as an antimalarial drug. However, due to its anti-inflammatory properties, HCQ is now widely used in the treatment of autoimmune diseases such as rheumatoid arthritis (RA) , systemic lupus erythematosus (SLE) , and Sjögren’s syndrome . The use of HCQ in other diseases has been under investigation, a pilot trial investigating the use of HCQ in patients after myocardial infarction showed a decrease in plasma IL-6 levels compared to placebo, and a larger trial studying the effect on recurrent cardiovascular events is currently ongoing . Furthermore, HCQ was under investigation for use in moderate to severe COVID-19 patients during the COVID-19 pandemic . The exact mechanisms behind HCQ immunosuppressive functions remain unclear. HCQ accumulates in the lysosomes and inhibits lysosomal function by autophagosome fusion with lysosomes , thereby inhibiting antigen presentation . In addition, HCQ inhibits proinflammatory cytokine production by myeloid cells, possibly via the inhibition of endosomal Toll-like receptor (TLR) signaling . It has been shown that HCQ treatment is associated with decreased interferon (IFN)α serum levels in SLE patients . Furthermore, several studies investigating the effect of HCQ on peripheral blood mononuclear cells (PBMCs) or cell lines show that HCQ treatment reduces phorbol 12-myristate 13-acetate (PMA) and ionomycin or lipopolysaccharide-induced cytokine production . Besides effects on the innate immune system, HCQ affects the adaptive immune response as well. It has been shown that HCQ inhibits differentiation of class-switched memory B cells into plasmablasts and thereby decreases IgG production in response to TLR9 stimulation or inoculation with inactivated virus . HCQ inhibits T cell activation as well, via the inhibition of T cell receptor-induced calcium mobilization and dysregulation of mitochondrial superoxide production . However, the concentrations used in such in vitro experiments studying the immunomodulatory effects of HCQ largely exceeded obtainable clinical concentrations in patients. A study in cutaneous lupus erythematosus patients receiving HCQ in clinical doses showed that higher HCQ blood levels corresponded with lower ex vivo IFNα responses after TLR9 stimulation, but not after TLR7/8 stimulation . Moreover, influenza antibody titers after vaccination in Sjögren’s syndrome patients receiving HCQ were lower compared to HCQ naïve patients . Unfortunately, little additional literature is available on the in vivo immunomodulatory effects of HCQ and comparing it to in vitro experiments. We aimed to assess and quantify the immunomodulatory effects of HCQ on primary human immune cells, both in vitro and ex vivo in a randomized clinical trial. We assessed the effect of HCQ on cytokine production after endosomal TLR stimulation in isolated PBMCs and on T and B cell proliferation (in vitro as well as ex vivo). In the clinical trial, healthy subjects were dosed with HCQ in the standard dosing regimen for moderate-to-severe COVID-19 that was advised in the Netherlands when the study was conceived. In the study design, we accounted for a potential age effect on the study outcomes, since general immunocompetence and drug metabolism have been reported to be age-dependent . Here, we present the outcomes of the in vitro experiment and the randomized clinical trial. 
In vitro experiments Blood was collected by venipuncture using sodium heparin vacutainer tubes or Cell Preparation Tubes (CPT, Becton Dickinson, Franklin Lakes, NJ, USA) from healthy volunteers after written informed consent, in accordance with Good Clinical Practice guidelines and the Declaration of Helsinki. Blood was used for the evaluation of the in vitro immunomodulatory activity of hydroxychloroquine (10–10,000 ng/mL, Sigma-Aldrich, Deisenhofen, Germany). All experiments were started within one hour after blood withdrawal, and incubations were performed in duplicate. Hydroxychloroquine and stimulant were added simultaneously. Per experiment, blood of 6 donors was used. Clinical study We conducted a single-blind, randomized, placebo-controlled multiple dose study in forty healthy male volunteers, comprising twenty young (18–30 years) and twenty elderly (65–75 years) subjects. The study was conducted at the Centre for Human Drug Research in Leiden, The Netherlands, between June and September 2020, during the COVID-19 pandemic. All subjects in the clinical trial gave written informed consent according to Declaration of Helsinki recommendations, prior to any study-related activity. The study was approved by the Independent Ethics Committee of the Foundation “Evaluation of Ethics in Biomedical Research” (Stichting Beoordeling Ethiek Biomedisch Onderzoek, Assen, The Netherlands) and registered in the Toetsingonline Registry (study number NL73816.056.20) and in the International Clinical Trials Registry Platform (NL8726). Volunteer selection To avoid sex-related interindividual variability in immune responses, only male subjects were included . Subjects were included if they were overtly healthy. The health status of subjects was assessed by medical screening, including medical history, physical examination, vital signs measurements, 12-lead electrocardiography (ECG), urine analysis, drug screen and safety chemistry, coagulation, and hematology blood sampling. BMI of study participants had to be between 18 and 32 kg/m². Subjects with a known hypersensitivity reaction to chloroquine, HCQ, or other 4-aminoquinolines, abnormalities in the resting ECG (including QTcF interval > 450 ms), evidence of any active or chronic disease or condition (including long QT syndrome, retinal disease, G6PD deficiency, autoimmune diseases, diabetes mellitus type I or II, and psychiatric disorders), or a positive SARS-CoV-2 PCR test were excluded from study participation. Use of concomitant medication was not permitted during the study and 14 days (or 5 half-lives) prior to the study drug administration, except for paracetamol. Study design Subjects were randomized to receive either hydroxychloroquine sulfate (Plaquenil®) or placebo tablets, in a 1:1 ratio. Tablets were dispensed by the pharmacy, according to a randomization list generated by a study-independent statistician. Plaquenil® and placebo tablets were packaged in the same way, but the tablets were not indistinguishable; study drug administration was therefore performed by dedicated unblinded personnel not involved in any other study tasks. Subjects received HCQ or placebo by a loading dose of 400 mg twice daily (t = 0 h and t = 12 h) followed by a 400 mg once daily dose regimen (t = 24 h, t = 48 h, t = 72 h, and t = 96 h), giving a cumulative dose of 2400 mg. 
This reflected the standard dosing regimen for moderate-to-severe COVID-19 patients in the Netherlands when the study was conceived (total dose between 2000 and 3800 mg). Pharmacokinetic evaluation For pharmacokinetic (PK) assessments, blood was collected in 3 mL Vacutainer® K 2 EDTA tubes (Becton Dickinson) on study day 0 (baseline and 3 h postdosing) and days 1, 4, and 9 (3 h postdosing). Hydroxychloroquine plasma concentrations were measured by Ardena Bioanalytical Laboratory (Assen, the Netherlands) using a validated LC–MS/MS method. The lower limit of quantification (LLOQ) of the analysis was 5 ng/mL. Whole blood stimulation Whole blood was stimulated with 10 μg/mL phytohemagglutinin (PHA, Sigma-Aldrich) for 6 h and 24 h. After 6 h, activation markers on T cells were measured using CD69-APC (clone: REA824), CD71-FITC (clone: REA902), CD154-VioBlue (REA238) and CD25-PE (clone: 3G10), CD3-VioGreen (REA613), CD4-APC-Vio770 (REA623), and CD8-PE-Vio770 (REA734) antibodies and propidium iodide as viability dye (all Miltenyi Biotec, Bergisch-Gladbach, Germany) using a MACSQuant 16 analyzer (Miltenyi Biotec). After 24 h, culture supernatants were collected for cytokine analysis. PBMC isolation and TLR stimulation PBMCs were isolated from CPT after centrifugation at 1800 × g for 30 min and washed 2 × using phosphate-buffered saline (PBS, pH 7.2, Gibco, Thermo Fisher, Waltham, MA, USA). PBMCs were stimulated with endosomal TLR ligands poly I:C (TLR3, 50 μg/mL), imiquimod (TLR7, 1 μg/mL), CpG class A (TLR9, oligodeoxynucleotides (ODN) 2.5 μM), and poly I:C/lyovec (RIG-I, 1 μg/mL; all Invivogen, Toulouse, France). Supernatants were collected after 24 h for cytokine quantification. Proliferation assay PBMCs were stained with 2.5 μM cell trace violet (CTV, Thermo Fisher) according to user’s manual. T cells were stimulated with 5 μg/mL phytohemagglutinin (PHA) and B cells with a monoclonal CD40 antibody (5 μg/mL; clone: G28.5, BioXCell) and CpG class B (2.5 μM; ODN Invivogen). After 5 days of stimulation, PBMCs were stained using CD4-PE (clone: OKT4), CD8-APC (clone: HIT8a), CD19-PE (clone: HIB19, all Biolegend, San Diego, CA, USA), and fixable viability dye eFluor780 (Thermo Fisher) and proliferation was quantified by flow cytometry, using the MACSQuant 16 analyzer. Flow cytometry Circulating leukocyte subsets were analyzed using flow cytometry. Red blood cell lysis was performed on sodium heparinized blood using RBC lysis buffer (Thermo Fisher Scientific). After washing with PBS (pH 7.2), leukocytes were incubated with fluorochrome-labeled antibodies for 30 min on ice. After a final washing step, leukocytes were measured on a MACSQuant 16 analyzer (Miltenyi Biotec). See supplemental table for a full list of antibodies used. Cytokine measurements IFNγ and IL-2 were quantified using the Vplex-2 kit (Meso Scale Discovery). IFNα and IL-6 were quantified using the pan-specific IFNα ELISA pro HRP kit and the IL-6 ELISA pro HRP kit (both Mabtech, Nacka Strand, Sweden). Statistical analysis In vitro data are reported as mean ± standard deviation (SD). The IC 50 was calculated using an inhibitory sigmoid Emax function where applicable. Analyses were performed using GraphPad Prism version 6.05 (GraphPad, San Diego, CA, USA). 
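As a hedged illustration of the in vitro concentration–response analysis mentioned above (an inhibitory sigmoid Emax model used to estimate the IC50), a minimal curve-fitting sketch is shown below; the concentrations and responses are invented, and the code is not the study's GraphPad analysis.

# Minimal sketch: fit an inhibitory sigmoid Emax (Hill) model to invented
# concentration-response data to estimate IC50; not the study's GraphPad analysis.
import numpy as np
from scipy.optimize import curve_fit

def inhibitory_emax(conc, e0, imax, ic50, hill):
    """Response = E0 - Imax * C^h / (IC50^h + C^h)."""
    return e0 - imax * conc**hill / (ic50**hill + conc**hill)

# Hypothetical HCQ concentrations (ng/mL) and cytokine responses (% of control)
conc = np.array([10, 30, 100, 300, 1000, 3000, 10000], dtype=float)
resp = np.array([98, 95, 85, 60, 35, 20, 15], dtype=float)

params, _ = curve_fit(inhibitory_emax, conc, resp, p0=[100, 90, 500, 1.0], maxfev=5000)
e0, imax, ic50, hill = params
print(f"estimated IC50 ≈ {ic50:.0f} ng/mL (Hill coefficient {hill:.2f})")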
Repeatedly measured pharmacodynamic data were evaluated with a mixed model analysis of variance with fixed factors treatment, age group, time, treatment by time, age group by time, treatment by age group, and treatment by age group by time and a random factor subject and the average prevalue as covariate. If needed, variables were log transformed before analysis. Contrasts between the placebo and HCQ treatment groups were calculated per endpoint. In addition, a potential age-specific HCQ effect was evaluated by comparing the 18–30 years with the 65–75 years age group. For the contrasts, an estimate of the difference (back-transformed in percentage for log-transformed parameters), a 95% confidence interval (in percentage for log-transformed parameters), least square means (geometric means for log-transformed parameters), and the p value were calculated. A p value ≤ 0.05 was considered to be statistically significant. All calculations were performed using SAS for Windows V9.4 (SAS Institute, Inc., Cary, NC, USA).
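To make the concentration–response step concrete, the sketch below shows how an inhibitory sigmoid Emax (four-parameter logistic) model can be fitted to estimate an IC50. This is a minimal Python/scipy illustration, not the authors' GraphPad Prism or SAS workflow; the concentration and response values, the parameterization, and the starting estimates are assumptions for demonstration only.

```python
# Minimal sketch: fit an inhibitory sigmoid Emax model to concentration-response
# data to estimate IC50. All data values below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_emax(conc, top, bottom, ic50, hill):
    """Response falls from `top` to `bottom` as concentration passes IC50."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([10, 30, 100, 300, 1000, 3000, 10000], dtype=float)  # ng/mL HCQ (hypothetical)
resp = np.array([98, 95, 80, 55, 30, 12, 5], dtype=float)            # % of stimulated control (hypothetical)

params, _ = curve_fit(sigmoid_emax, conc, resp, p0=[100.0, 0.0, 500.0, 1.0])
top, bottom, ic50, hill = params
print(f"Estimated IC50 ≈ {ic50:.0f} ng/mL (Hill slope {hill:.2f})")
```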
Hydroxychloroquine suppressed endosomal TLR-induced IFNα and IL-6 release in vitro PBMCs were stimulated with endosomal TLR ligands in the presence of a dose range of HCQ for 24 h, and supernatants were analyzed for IRF-mediated IFNα and for NFκB-mediated IL-6 secretion. PBMCs were stimulated with different endosomal TLR ligands: poly I:C (TLR3), imiquimod (TLR7), CpG class A (TLR9), and poly I:C lyovec (RIG-I). HCQ dose-dependently inhibited endosomal TLR-induced IFNα and IL-6 secretion (Fig. ). Poly I:C-induced IFNα and IL-6 release was strongly suppressed at 10,000 ng/mL (IFNα: − 83.9%, IL-6: − 96.6%, IC50 IL-6 = 637.2 ng/mL). Imiquimod (IMQ)-induced cytokine release was completely suppressed at the highest concentration (IFNα: − 96.3%, IL-6: − 96.3%, IC50 IFNα: 695.8 ng/mL, IL-6: 237.9 ng/mL). The same was observed for stimulation with CpG class A: IFNα was suppressed by 99.6% with an IC50 of 145.3 ng/mL, and IL-6 was suppressed by 96.4%, with an IC50 of 86.9 ng/mL. The RIG-I response to poly I:C/lyovec was less affected by HCQ: IFNα release was suppressed by 66.1% at 10,000 ng/mL HCQ, while IL-6 release was not significantly altered. HCQ inhibited B cell proliferation but not T cell proliferation in vitro PBMCs were stimulated with phytohemagglutinin (PHA) or monoclonal anti-CD40 with CpG-B to induce T cell and B cell proliferation, respectively, in the presence of a dose range of HCQ. No effect of HCQ was seen on T cell proliferation (Fig. A). Also, no effects were observed on T cell activation markers after PHA stimulation for 6 h (Figure ). At HCQ concentrations > 100 ng/mL, a decrease in B cell proliferation was observed, with an IC50 of 1138 ng/mL (Fig. B). Clinical study Demographics and safety Of the 40 enrolled and randomized healthy subjects, 20 received a cumulative dose of 2400 mg HCQ in 5 days and 20 received placebo (Fig. ). The different age groups (18–30 and 65–75 years) were of equal size. Baseline characteristics are described in Table . All subjects completed their study treatment. One subject in the 65–75 years group erroneously took an additional 400 mg dose of HCQ on study day 2, after which the subject received 400 mg doses (once daily) for two consecutive days so as not to exceed the cumulative dose of 2400 mg. Treatment-emergent adverse events were transient and of mild severity and did not lead to study discontinuation. Adverse events were reported more often by subjects in the active treatment arm (50%) compared to placebo (35%). Gastrointestinal complaints (20%) and dizziness (15%) were the most frequently reported adverse events in the active group. There were no findings of clinical concern following assessments of urinalysis, hematology and chemistry laboratory tests, vital signs, physical examination, and ECGs. Pharmacokinetics Mean HCQ concentration time profiles in plasma are depicted in Fig. A. Individual concentration profiles have been published previously. There were no significant differences in HCQ exposures between age groups (Fig. B).
Mean concentrations measured 27 h after starting the treatment course (day 1, 121.0 ± 40.54 ng/mL) were in a similar range to those measured on the last day of the treatment course (day 4, 109.2 ± 35.59 ng/mL). Pharmacodynamics Hydroxychloroquine did not affect circulating immune cells The effects of HCQ on different circulating cell populations, both absolute and relative, were evaluated using flow cytometry. No apparent effects were seen on absolute values of total leukocytes, lymphocytes, monocytes, or neutrophils (Table ), nor on CD14+ monocytes, CD19+ B cells, CD3+ T cells, CD4+ T cells, and CD8+ T cells (Table ). Furthermore, no effects were seen on relative T cell populations (CD3+) in general, nor on subpopulations of T helper cells (CD4+), cytotoxic T cells (CD8+), and regulatory T cells (CD4+CD25+CD127−). Similarly, no apparent treatment effects were observed in natural killer cells (CD56+), B cells (CD19+), and subpopulations of regulatory (CD5+CD1dhi), transitional (CD24hiCD38hi), and antibody-secreting B cells (CD27+CD38+). Moreover, in classical (CD14+), nonclassical (CD16+), and intermediate (CD14+CD16+) monocytes and plasmacytoid dendritic cells (pDCs, HLA-DR+CD14−CD16−CD123+), no differences were found between treatment groups. No evident HCQ effects were observed between the two age groups (Table ). In vivo hydroxychloroquine suppressed IFNα secretion following TLR7 stimulation, but not after TLR3, TLR9, or RIG-I-like receptor stimulation To study the effects of HCQ on TLR/RIG-I-mediated IRF activation, PBMCs were stimulated with different endosomal TLR ligands: poly I:C (TLR3), imiquimod (TLR7), CpG class A (TLR9), and poly I:C lyovec (RIG-I). Overall, no HCQ effect was observed on IFNα responses (Fig. ), except for a significant suppression of IMQ-driven IFNα production (inhibition of − 48.2%, 95% CI − 72.1%– − 4.0%, p = 0.038). Poly I:C-driven IFNα release also appeared to be suppressed by HCQ, but not significantly (inhibition − 34.2%, 95% CI − 57.7%–7.5%, p = 0.091). No differences in HCQ effect on IFNα responses were observed between the young and elderly population (Figure ). In vivo hydroxychloroquine significantly suppressed IL-6 secretion after TLR7 stimulation, but not following TLR3, TLR9, or RIG-I-like receptor stimulation Activation of NFκB signaling via endosomal TLR and RIG-I-like ligands was assessed by measuring downstream IL-6 production (Fig. ). HCQ significantly suppressed IMQ-driven IL-6 production (inhibition of − 71.3%, 95% CI − 84.7%– − 46.1%, p = 0.0005). No significant HCQ effects were observed on IL-6 production driven by CpG A (TLR9) and poly I:C (TLR3) stimulations (inhibition of − 35.9%, 95% CI − 60.3%–3.6%, p = 0.068, and − 37.7%, 95% CI − 62.6%–3.7%, p = 0.067, respectively). No differences in HCQ effect on IL-6 responses were observed between the young and elderly population (Figure ). In vivo hydroxychloroquine did not alter T cell activation To further investigate the potential immunomodulatory effect of HCQ on T cell activation, whole blood samples were incubated with PHA, which is known to induce a general T cell response. HCQ treatment did not modulate expression of T cell activation markers (CD25, CD69, CD71, and CD154) following PHA stimulation (Figure ). In addition, PHA-induced secretion of IL-2 and IFNγ was assessed; no apparent differences were observed between HCQ and placebo (Figure ).
Hydroxychloroquine did not alter ex vivo B and T cell proliferation after in vivo administration Proliferative capability of B cells was assessed by stimulating PBMCs ex vivo with anti-CD40 mAbs + CpG B ODNs, a known stimulus for human B cell activation. Following stimulation of PBMCs, the percentage of proliferative B cells in the HCQ-treated group was similar to that of the placebo group (70.47% at day 4 for placebo, 70.03% for HCQ) (Fig. ). In addition, PBMCs were stimulated with PHA to induce T helper cell (CD4+) and cytotoxic T cell (CD8+) proliferation. Proliferation of both CD4+ and CD8+ cells was comparable between the HCQ- and placebo-treated group (> 95% for both groups at all time points for CD4, > 92% for both groups at all time points for CD8). No differences were observed for B and T cell proliferation in the separate age groups (Figure ).
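A note on reading the inhibition percentages above: per the statistical methods, contrasts for log-transformed endpoints are estimated on the log scale and back-transformed into a percent change. The snippet below is a small, purely illustrative conversion, assuming a natural-log transformation (the text does not state which log base was used); the −71.3% figure is reused from the results only as an example.

```python
# Illustration only: converting between a log-scale treatment contrast and a
# reported percent inhibition (assumes natural-log transformation of the endpoint).
import math

def percent_change(log_diff):
    """Percent change implied by a difference on the natural-log scale."""
    return (math.exp(log_diff) - 1.0) * 100.0

def log_contrast(pct):
    """Log-scale difference corresponding to a reported percent change."""
    return math.log(1.0 + pct / 100.0)

d = log_contrast(-71.3)                                   # e.g., the reported IMQ-driven IL-6 inhibition
print(f"log-scale contrast ≈ {d:.2f}")                    # ≈ -1.25
print(f"back-transformed ≈ {percent_change(d):.1f}%")     # ≈ -71.3%
```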
Although HCQ is widely used for the treatment of autoimmune diseases, the exact mechanism behind its immunomodulatory properties remains unclear. In this study, we therefore aimed to quantify the immunosuppressive effect of HCQ by studying the endosomal TLR response and lymphocyte proliferation and activation both in in vitro experiments and in vivo in a randomized placebo-controlled trial in healthy volunteers. In our in vitro experiments, HCQ dose-dependently inhibited TLR3-, 7-, and 9-driven IL-6 and IFNα production, with profound effects at concentrations > 100 ng/mL. These findings are in line with literature on TLR signaling modulation by chloroquine.
Limited data are available on the immunomodulatory effect of HCQ/chloroquine on RIG-I signaling. RIG-I functions as a cytosolic sensor of nucleic acids, inducing a type I IFN response after activation. HCQ inhibited the IFN responses in THP-1 cells transfected with RIG-I ligands, but this effect was not confirmed in cultures of human bronchial smooth muscle and epithelial cells. This is in line with the observations in the current study, which shows that HCQ only mildly modulated RIG-I-mediated IFNα production in PBMCs, without affecting IL-6 release. Our results suggest that HCQ has a profound effect on endo-lysosomal TLR functioning in vitro but affects the cytosolic RIG-I-mediated pathway to a lesser degree. This could be explained by HCQ's exceptionally high affinity for the lysosomal intracellular compartment (expected to be 56,000-fold higher than cytosol). HCQ did not affect T cell activation in vitro. Although a dose-dependent inhibition of T cell proliferation by chloroquine following stimulation with anti-CD3/CD28 has been described, we did not see any inhibitory effect of HCQ on T cell proliferation or expression of activation markers in our in vitro experiments. This may be explained by the fact that a different and more potent stimulus was used in this study (PHA), which might be more difficult to suppress. For B cell proliferation, on the other hand, a dose-dependent HCQ-mediated inhibition was observed in vitro, confirming previous research. Although the HCQ-mediated inhibition was not as strong as the inhibition of cytokine production (IC50 of 1138 ng/mL for B cell proliferation vs. 145–696 ng/mL for cytokine production), at concentrations > 100 ng/mL a clear HCQ-mediated decrease in B cell proliferation was found. While HCQ had strong immunosuppressive effects in vitro, especially at high concentrations, less pronounced ex vivo effects of the compound were observed in our clinical study. Compared to placebo, 5-day HCQ treatment did not significantly suppress B cell proliferation or ex vivo TLR-driven IFNα and IL-6 secretion in PBMC cultures, except for a suppressive effect on TLR7-driven responses. The most likely explanation for this discrepancy between in vitro and ex vivo is that there was insufficient drug exposure at the evaluated HCQ dose and regimen in the clinical study. By using a 5-day dose regimen of HCQ (the recommended off-label dose for COVID-19 at the time of study conduct), an average maximum plasma concentration of 121 ng/mL was reached. This concentration is considerably lower than plasma levels found in RA patients receiving HCQ treatment of 200 mg daily for a longer time period, which range from 200 to 500 ng/mL. Peak exposures of 100–150 ng/mL from the clinical study translate into a maximal inhibitory effect of 20 to 50% in most cellular assays. In combination with the observed variability of the endpoints, such effects can easily remain undetected. However, whole blood concentrations are expected to be approximately two- to sevenfold higher than plasma concentrations due to intracellular uptake in blood components, which would bring the concentrations (roughly 240–850 ng/mL at the observed peak of ~121 ng/mL) more in range with the in vitro experiments. Also, due to the large volume of distribution and the high HCQ tissue concentrations as compared to plasma, immunosuppressive effects in specific tissues may be significant. Moreover, HCQ has a gradual onset of action and remains biologically active even after drug discontinuation.
This would mean that the five-day treatment used in the current study may be too short for ex vivo drug effects to become detectable. Other studies, for example those investigating HCQ effects in HIV patients, have likewise shown a discrepancy between plasma levels and drug efficacy. The widespread use of hydroxychloroquine following the onset of the COVID-19 pandemic was the reason to initiate our experiments. The initial off-label use of HCQ was primarily based on studies that assessed in vitro antiviral activity against SARS-CoV-2. However, there is also a longstanding hypothesis that the immunomodulatory properties of chloroquine and HCQ could dampen immunopathology caused by viral infections such as influenza, Severe Acute Respiratory Syndrome (SARS), Middle East Respiratory Syndrome (MERS), and COVID-19 by suppressing the host immune response. Use of HCQ in COVID-19 patients, whether for prophylaxis or treatment, did not show evident favorable effects on clinical endpoints such as mortality and mechanical ventilation. Our study provides mechanistic insight into the immunomodulatory effects of an HCQ dosing regimen that was used to treat COVID-19. We found that a 5-day treatment course of HCQ did not have extensive immunomodulatory effects in healthy individuals; HCQ treatment only significantly inhibited TLR7 responses. In theory, inhibition of the TLR7-mediated innate response to viral agents may be disadvantageous during the initial stages of viral infection. However, recent COVID-19 trials did not show an effect of HCQ treatment on disease incidence, and long-term HCQ use in rheumatoid arthritis is not associated with a higher incidence of upper respiratory tract infections. In conclusion, we showed extensive and profound immunomodulation by HCQ in vitro; however, in a clinical study in healthy volunteers, the overall immunomodulatory effects of a 5-day HCQ treatment regimen of 2400 mg were limited. The pharmacological activity of HCQ in autoimmunity remains to be studied in greater detail, based on the assays presented in our studies and at a therapeutic dose and regimen relevant for the condition of interest. Below is the link to the electronic supplementary material. Supplementary file 1 (PDF 487 KB)
Controlled Release of Multiple Therapeutics From Silicone Hydrogel Contact Lenses for Post-Cataract/Post-Refractive Surgery and Uveitis Treatment
d92adbbf-ac13-49ea-b785-b5e7ff607645
8662571
Ophthalmology[mh]
Eye disease affects quality of life worldwide, with more than one billion individuals having preventable or treatable vision impairment. According to the Lancet Global Health Commission, vision impairment results in an estimated $400 billion in lost economic productivity. There is a pressing unmet need for better and more efficient methods of treatment for ocular diseases that result in better patient outcomes and an increased quality of care. The current state of the art for delivery of therapeutics to the eye is topical formulations in the form of solutions, suspensions, and ointments, which currently account for more than 90% of the ophthalmic market. Topical formulations are an inefficient and inefficacious method of ocular drug delivery with several major issues. Patient compliance is a major issue with regard to variability in dose sizes, as patients have been shown to miss doses and to be unable to replicate the same drop angle, drop height, and squeeze force when administering a drop, resulting in variability in drop volume. Even when ideally administered, topical formulations suffer from low bioavailability, with only 1–8% of the applied therapeutic able to penetrate the eye and the remainder entering systemic circulation. The natural barriers within the eye prevent applied therapeutics from quickly penetrating the eye, whereas tear turnover results in medication being quickly washed out, limiting the effectiveness of topical formulations. These issues with topical formulations compound with one another, resulting in a significantly inefficient and inconsistent method of treatment. Since the inception of soft contact lenses, contact lenses that elute drugs have been studied as vehicles for a more effective method of treating ocular ailments because of their noninvasive nature and ability to partition molecules within the aqueous regions of the lens. To date, numerous attempts have been made to deliver therapeutics via hydrogel contact lenses, starting with drug loading via equilibrium partitioning of drug into a commercial lens. Although this method is the easiest to implement, requiring only a pre-existing contact lens and drug solution, it offers no control over release rate, with no additional mechanism slowing drug release, and early results demonstrated release profiles similar to topical formulations. Several methods have been attempted to decrease and control drug release rate, including carrier-mediated release, release with diffusion barriers, and molecular imprinting. Unfortunately, after more than 50 years of development, there is currently no commercially available contact lens drug delivery system, with many methods of loading and release failing to produce lenses that control and extend therapeutic release duration, maintain necessary physical properties of a contact lens, and deliver a therapeutically relevant amount of drug. In this article, we present novel extended-wear silicone hydrogel contact lenses that release multiple small molecule therapeutics from a single lens to be used to treat post-cataract, post-refractive surgery, uveitis, and corneal abrasion patients. The first lens system releases a nonsteroidal anti-inflammatory drug (NSAID), diclofenac sodium (DS), and a steroidal anti-inflammatory drug, dexamethasone sodium phosphate (DMSP), and the second lens system releases an NSAID, bromfenac sodium (BS), and an antibiotic, moxifloxacin (MOX).
These lens systems have significant applications as a dropless alternative for the treatment of ocular pain and inflammation and can be administered as a nonrefractive bandage or a refractive vision-corrective lens. Lenses can be administered once a week and worn night and day continuously for seven days while releasing a consistent dose of therapeutic, offering better compliance than numerous topical drops and matching patient recall times. Replacing a lens each week also gives the clinician an opportunity to alter the dose. BS + MOX releasing lenses offer prophylaxis against infection. By avoiding steroids, they also offer the potential for effective treatment, controlling inflammation without compromising corneal endothelial regeneration/function and without increasing intraocular pressure or the risk of pseudophakic cystoid macular edema. NSAID alone has been demonstrated to be moderately more effective at controlling postoperative inflammation after cataract surgery and more effective at preventing pseudophakic cystoid macular edema without increasing intraocular pressure. Additionally, lenses that release only BS or only an NSAID/steroidal anti-inflammatory drug offer the option of a single-dose antibiotic application via intracameral irrigation at the completion of surgery, which is becoming a more widely accepted standard of care and has been shown to reduce endophthalmitis risk six- to sevenfold. A lens wear time of one week (seven days) matches standard-of-care clinician recall or patient follow-up, with treatment durations of two to six weeks post-cataract, six to 10 weeks for anterior uveitis, one week for refractive surgery, and one week for corneal abrasion. Typical treatment is one week for laser-assisted surgery and superficial abrasions, but complications and deeper abrasions may require an additional week after follow-up. The controlled release rationale focuses on the use of a macromolecular memory strategy for drug loading and release. This method involves templating of the drug into the polymer network of the contact lens via addition of the drug to the prepolymer formulation with functional monomeric units that non-covalently bind the template drug. These monomer-drug complexes remain during the polymerization process, resulting in templating of the drug within the lens and formation of macromolecular memory sites. These sites offer strict control over loading as well as release, without negatively affecting physical properties of the lens such as oxygen transport, optical clarity, elastic modulus, and water content. Release control can be exerted over a wide variety of template drugs via parameters such as the ratio of functional monomer to template (M/T ratio) and the choice of crosslinker or functional monomer. This method of loading and release has been demonstrated by our laboratory to be effective for a wide range of molecules with a variety of different sizes and functionality. Synthesis of Silicone Hydrogel Contact Lenses Methacryloxypropyl terminated polydimethylsiloxane (DMS-R11) and methacryloxypropyl-tris-(trimethylsiloxy) silane (TRIS) were purchased from Gelest, Inc. (Morrisville, PA, USA).
N,N dimethyl acrylamide (DMA), ethylene glycol dimethacrylate, polyethylene glycol (200) dimethacrylate (PEG200DMA), diethyl aminoethyl methacrylate (DEAEM), diallyl dimethyl ammonium chloride (DADMAC), acrylic acid (AA), methacrylic acid (MAA), dexamethasone sodium phosphate (DMSP), diclofenac sodium (DS), bromfenac sodium (BS), moxifloxacin (MOX), ethanol, and 2-hydroxy-2-methylpropiophenone were purchased from VWR (Radnor, PA, USA). Silicone hydrogel contact lenses were synthesized using various mixtures of DMS-R11, TRIS, and DMA in addition to PEG200DMA, ethylene glycol dimethacrylate, DEAEM, AA, MAA, DADMAC, and ethanol, with MOX, BS, DMSP, or DS added to the prepolymer formulation in various combinations. Photo-initiator 2-hydroxy-2-methylpropiophenone was added at a composition of <1% of total formulation. Monomers were added at various monomer-to-template (M/T) ratios for each drug, equating to up to 10 mol% of total formulation. M/T ratio refers to the molar ratio of the functional monomer to the template drug and dictates the amount of drug added to the prepolymer formulation such that no more than 10 mol% of the total formulation is functional monomer. Functional monomers were selected based on their ability to noncovalently complex with drug molecules. DEAEM and DADMAC were selected due to their positive charge and ability to form ionic bonds with negatively charged template molecules, whereas MAA and AA were chosen to form hydrogen bonds with template molecules that did not possess a charge. M/T ratios were normalized to the highest M/T ratio among all formulations. Control lenses were synthesized using the same macromers and monomers but without addition of template drug to the pre-polymer formulation. The pre-polymer formulation was vortexed for approximately one minute and then sonicated for 30 minutes at room temperature to remove dissolved gases and ensure full dissolution of the template drug. A volume of 65 µL of the pre-polymer formulation was pipetted into polypropylene lens molds (swollen silicone lens dimensions: 14.8 mm diameter, 8.4 mm base curve). Polymerization occurred via ultraviolet (UV) polymerization using an Omnicure S2000 (Excelitas Technologies Corp., Waltham, MA, USA), with an intensity of approximately 40 mW/cm² for a duration of two minutes. The effect of UV exposure on the chemistry of the loaded drugs was assessed via 1H-NMR (400 MHz, Agilent Technologies, Santa Clara, CA, USA) to ensure that UV polymerization did not alter the chemical structure. Mass of drug within the lens was determined from drug uptake and release experiments via mass balance. Template Drug Binding Studies All lenses were washed in 700 mL to 1 L of phosphate-buffered saline solution (PBS) in a Sotax AT Xtend Dissolution System (Sotax, Westborough, MA, USA) at 30 rpm. To verify washing, lenses were removed and placed in 2 mL of PBS, and supernatant drug concentration was measured until no drug was observed releasing from the lens (lower limit of detection of ∼0.5 µg/mL). Lenses that displayed additional drug elution were placed back in the dissolution apparatus. Effectiveness of the wash was determined via mass balance analysis during washing and release based on the mass of drug loaded within the lens, with more than 95% of the loaded drug released during the washing process. Template binding studies were performed by placing washed lenses of different M/T ratios in 3 mL of 150 µg/mL drug solution (BS, DS, or DMSP, in DI water) until equilibrium was reached, which was verified experimentally.
Equilibrium concentration of the supernatant was measured via UV/Vis spectrophotometry (280 nm) and used to determine mass uptake via mass balance. For dry lens mass, lenses were dried in a vacuum oven (T = 30°C, 28 in. Hg vacuum) until weight change was less than 0.1%, and dry masses were measured. Normalized drug mass uptake (µg drug/mg polymer) was determined for each M/T ratio. Control lenses were synthesized, washed, and loaded via the same method as templated lenses and analyzed for drug mass uptake. Imprinting factor for lenses at each M/T ratio was calculated by dividing normalized drug mass uptake by the normalized drug uptake observed in controls. In Vitro Physiological Flow Release Release studies were conducted via an in vitro physiological flow model using a microfluidic device. The device was produced using polydimethylsiloxane. A 10:1 mixture of Sylgard 184 silicone base and curing agent was prepared and stirred for four minutes, then poured onto a glass plate within a circular mold. Two needles (1.27 mm outer diameter) were placed into the mold to create an inlet and outlet for flow, and a hemisphere on the glass plate (9.00 ± 0.10 mm radius of curvature) created a well in which contact lenses were placed during release. The device was then cured at 60°C for six hours. Drug-loaded lenses were placed in the well of the device and fixed into position with a glass hemisphere (8.75 ± 0.10 mm radius of curvature). The microfluidic device was then sealed onto a glass plate using metal clamps. A kd Scientific Model 220 syringe pump (kd Scientific Inc., Holliston, MA, USA) was used to inject solution (DI water or PBS) at ambient temperature (25°C) through the device at a rate of 3 µL/min, simulating physiological tear flow. Before release analysis, lenses were fully washed until no additional drug was observed eluting from the lens and then reloaded with the template drugs. Release samples were collected and analyzed at different time intervals via HPLC (Waters Corp, Milford, MA, USA) with a tandem UV/Vis detector at a wavelength of 280 nm. HPLC conditions consisted of a C18 column (3.8 µm diameter; Waters Corp, Milford, MA, USA) and a mobile phase of 50% acetonitrile and 50% aqueous (1% formic acid, v/v). Physical Property and Structural Analysis To determine optical transmittance, transmittance of visible light (450–700 nm) was measured through circular hydrogel lens segments cut with a cork borer with a diameter of 1.5 mm. Each lens segment was placed in the bottom of a 96-well plate and hydrated in 200 µL of DI water along with a blank well containing only 200 µL of water, with care taken to ensure that there were no air bubbles present in any wells. Absorbance values of each well were measured in a Tecan Infinite M200 Pro spectrophotometer (Tecan, Männedorf, Switzerland), and absorbance values of blank wells were subtracted from wells containing lenses. Contact angle with water was measured via sessile drop contact angle goniometry (Theta Flex Tensiometer, Nanoscience Instruments, Phoenix, AZ, USA). Contact lenses were plasma coated in an SPI Plasma Prep III Plasma Cleaner (SPI Supplies, West Chester, PA, USA), and 5 mm circular cutouts were cut from the lenses with a cork borer. Using a micropipette, a water droplet was placed on the surface of the cutouts and contact angle was measured. Elastic modulus was measured via synthesis of rectangular drug eluting silicone hydrogel sheets via UV photopolymerization using glass slides separated by 500 µm Teflon spacers.
Dumbbell-shaped tensile testing strips were cut from these sheets and analyzed for elastic modulus using a Shimadzu EZ-SX tensile tester (Shimadzu, Kyoto, Japan) at a gauge length of approximately 18 mm and stretched at a rate of 5 mm/min. Elastic modulus was determined by measuring the initial slope of the stress/strain curve. Hydrogels remained hydrated for the duration of the tests via aerosol diffusion of water. Edge-corrected Dk was calculated according to ISO 18369.4 (Ophthalmic Optics – Contact Lenses – Part 4: Physiochemical Properties of Contact Lens Materials). Lenses swollen in PBS were stacked to create polymers of different center thicknesses, measured using an electronic micrometer. Each lens or lens stack was placed on a polarographic oxygen sensor (Createch/Rehder Dev Co., Greenville, SC, USA) with 8.7 mm base curve and analyzed using a 201T oxygen permeameter. Equilibrium weight swelling ratio was determined by measuring the ratio of the swollen polymer weight to the dry weight. Synthesized lenses were dried in a vacuum oven until weight change was less than 0.1%, and the weight of the dried lenses was recorded. Lenses were swollen in DI water, and swollen mass was recorded. Equilibrium weight swelling ratio was calculated using the relationship $q = \frac{W_s - W_d}{W_d}$, where $q$ is the equilibrium weight swelling ratio, $W_s$ is the weight of the swollen gel, and $W_d$ is the weight of the dry gel. Equilibrium volume swelling ratio was determined by measuring the ratio of the swollen volume to the dry volume. Volumes of dried and swollen gels were determined using Archimedes' principle. Equilibrium volume swelling ratio was determined using the relationship $Q = \frac{1}{\upsilon_{2,s}} = \frac{V_s}{V_d}$, where $Q$ is the equilibrium volume swelling ratio, $\upsilon_{2,s}$ is the polymer volume fraction in the swollen state, $V_s$ is the volume of the swollen gel at equilibrium, and $V_d$ is the volume of the dry gel. The average molecular weight between crosslinks was calculated by analyzing tensile properties of synthesized polymers as well as polymer volume fractions. The relationship to calculate molecular weight between crosslinks was as follows: $E = \frac{R T\,\upsilon_{2,s}^{1/3}}{\bar{\upsilon}\,\bar{M}_c}\left(1 - \frac{2\bar{M}_c}{M_n}\right)$, where $E$ is the tensile modulus, $R$ is the ideal gas constant, $T$ is temperature, $M_n$ is the number average molecular weight of the polymer chains, $\upsilon_{2,s}$ is the polymer volume fraction in the swollen state, $\bar{\upsilon}$ is the specific volume of the swollen polymer, and $\bar{M}_c$ is the average molecular weight between crosslinks. Average molecular weight between crosslinks was used to calculate the average mesh size of synthesized polymers using the relationship $\xi = \upsilon_{2,s}^{-1/3}\left(\frac{2 C_n \bar{M}_c}{M_r}\right)^{1/2} l$, where $\xi$ is mesh size, $\upsilon_{2,s}$ is the polymer volume fraction in the swollen state, $M_r$ is the molecular weight of the repeat unit, $\bar{M}_c$ is the average molecular weight between crosslinks, $C_n$ is the Flory characteristic ratio, and $l$ is the length of the bond along the polymer backbone. Average molecular weight between crosslinks and mesh size were normalized to the highest values for each among all formulations.
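To make these structural calculations concrete, the sketch below implements the relationships above: the equilibrium swelling ratios, the average molecular weight between crosslinks solved from the rearranged tensile-modulus equation, and the mesh size. Every input value is a hypothetical placeholder for illustration (in particular the Flory characteristic ratio, repeat-unit molecular weight, number-average chain molecular weight, and bond length are assumed), not a measurement from this work.

```python
# Minimal sketch of the swelling/structural calculations, with assumed inputs.
import math

R = 8.314           # ideal gas constant, J/(mol*K)
T = 298.0           # temperature, K

# Hypothetical measurements (illustrative only)
W_s, W_d = 45.0e-3, 18.0e-3   # swollen / dry lens mass (g)
V_s, V_d = 42.0e-9, 16.8e-9   # swollen / dry lens volume (m^3)
E = 0.5e6                     # tensile (elastic) modulus (Pa)
v_bar = 1.0e-3                # specific volume of the polymer (m^3/kg), assumed
M_n = 100.0                   # number-average MW of chains (kg/mol), assumed
C_n = 8.5                     # Flory characteristic ratio, assumed
M_r = 0.100                   # repeat-unit MW (kg/mol), assumed
l = 0.154e-9                  # backbone bond length (m), assumed

q = (W_s - W_d) / W_d         # equilibrium weight swelling ratio
Q = V_s / V_d                 # equilibrium volume swelling ratio
v2s = 1.0 / Q                 # polymer volume fraction in the swollen state

# Rearranged modulus relation: E = (R*T*v2s^(1/3) / (v_bar*Mc)) * (1 - 2*Mc/M_n)
A = R * T * v2s ** (1.0 / 3.0) / v_bar
M_c = A / (E + 2.0 * A / M_n)     # average MW between crosslinks (kg/mol)

# Mesh size: xi = v2s^(-1/3) * (2*C_n*Mc/M_r)^(1/2) * l
xi = v2s ** (-1.0 / 3.0) * math.sqrt(2.0 * C_n * M_c / M_r) * l

print(f"q = {q:.2f}, Q = {Q:.2f}, Mc ≈ {M_c*1e3:.0f} g/mol, mesh size ≈ {xi*1e9:.1f} nm")
```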
N,N dimethyl acrylamide (DMA), ethylene glycol dimethacrylate, polyethylene glycol (200) dimethacrylate (PEG200DMA), diethyl aminoethyl methacrylate (DEAEM), diallyl dimethyl ammonium chloride (DADMAC), acrylic acid (AA), methacrylic acid (MAA), dexamethasone sodium phosphate (DMSP), diclofenac sodium (DS), bromfenac sodium (BS), and moxifloxacin (MOX), ethanol, and 2-hydroxy-2-methylpropiophenone were purchased from VWR (Radnor, PA, USA). Silicone hydrogel contact lenses were synthesized using various mixtures of DMS-R11, TRIS, and DMA in addition to PEG200DMA, ethylene glycol dimethacrylate, DEAEM, AA, MAA, DADMAC, and ethanol with MOX, BS, DMSP, or DS added to the prepolymer formulation in various combinations. Photo-initiator 2-hydroxy-2-methylpropiophenone was added at a composition of <1% of total formulation. Monomers were added at various monomer-to-template (M/T) ratios for each drug equating to up to 10 mol% of total formulation. M/T ratio refers to the molar ratio of the functional monomer to the template drug and dictates the amount of drug added to the prepolymer formulation such that no more than 10 mol% of the total formulation is functional monomer. Functional monomers were selected based on their ability to noncovalently complex with drug molecules. DEAEM and DADMAC were selected due to their positive charge and ability to form ionic bonds with negatively charged template molecules whereas MAA and AA were chosen to form hydrogen bonds with templates molecules that did not possess a charge. M/T ratios were normalized to the highest M/T ratio among all formulations. Control lenses were synthesized using the same macromers and monomers but without addition of template drug to the pre-polymer formulation. The pre-polymer formulation was vortexed for approximately one minute and then sonicated for 30 minutes at room temperature to remove dissolved gases and ensure full dissolution of the template drug. A volume of 65 µL of the pre-polymer formulation was pipetted into polypropylene lens molds (dimensions swollen silicone lens 14.8 mm diameter, 8.4 base curve). Polymerization occurred via ultraviolet (UV) polymerization using an Omnicure S2000 (Excelitas Technologies Corp., Waltham, MA, USA), with an intensity of approximately 40 mW/cm 2 for a duration of two minutes. UV effects on the chemistry of loaded drugs was verified via 1 H-NMR (400 MHz, Agilent Technologies, Santa Clara, CA, USA) to ensure that UV polymerization did not affect the chemical structure. Mass of drug within the lens was determined via drug uptake and release experiments via mass balance. All lenses were washed in 700 mL to 1 L of phosphate-buffered saline solution (PBS) in a Sotax AT Xtend Dissolution System (Sotax, Westborough, MA, USA) at 30 rpm. To verify washing, lenses were removed and placed in 2 mL of PBS and supernatant drug concentration was measured until no drug was observed releasing from the lens (lower limit of detection of ∼0.5 µg/mL). Lenses that displayed additional drug elution were placed back in the dissolution apparatus. Effectiveness of the wash was determined via mass balance analysis during washing and release based on the mass of drug loaded within the lens, with more than 95% of the loaded drug released during the washing process. Template binding studies were performed by placing washed lenses of different M/T ratios in 3 mL of 150 µg/mL drug solution (BS, DS, or DMSP, in DI water) until equilibrium was reached, which was verified experimentally. 
Equilibrium concentration of the supernatant was measured via UV/Vis spectrophotometry (280 nm) and used to determine mass uptake via mass balance. For dry lens mass, lenses were dried in a vacuum oven (T = 30°C, 28 in. Hg vacuum) until weight change was less than 0.1% and dry masses were measured. Normalized drug mass uptake (µg drug/mg polymer) was determined for each M/T ratio. Control lenses were synthesized, washed, and loaded via the same method as templated lenses and analyzed for drug mass uptake. The imprinting factor for lenses at each M/T ratio was calculated by dividing normalized drug mass uptake by the normalized drug uptake observed in controls.

Release studies were conducted via an in vitro physiological flow model using a microfluidic device . The device was produced using polydimethylsiloxane. A mixture of a 10:1 ratio of Sylgard 184 Silicone base and curing agents was prepared and stirred for four minutes, then poured onto a glass plate within a circular mold. Two needles (1.27 mm outer diameter) were placed into the mold to create an inlet and outlet for flow, and a hemisphere on the glass plate (9.00 ± 0.10 mm radius of curvature) created a well in which contact lenses were placed during release. The device was then cured at 60°C for six hours. Drug loaded lenses were placed in the well of the device and fixed into position with a glass hemisphere (8.75 ± 0.10 mm radius of curvature). The microfluidic device was then sealed onto a glass plate using metal clamps. A kd Scientific Model 220 syringe pump (kd Scientific Inc., Holliston, MA, USA) was used to inject solution (DI water or PBS) at ambient temperature (25°C) through the device at a rate of 3 µL/min, simulating physiological tear flow. Before release analysis, lenses were fully washed until no additional drug was observed eluting from the lens and then reloaded with the template drugs. Release samples were collected and analyzed at different time intervals via HPLC (Waters Corp, Milford, MA, USA) with a tandem UV/Vis detector at a wavelength of 280 nm. HPLC conditions consisted of a C18 column (3.8 µm diameter; Waters Corp, Milford, MA, USA) and a mobile phase of 50% acetonitrile and 50% aqueous (1% formic acid, v/v).

To determine optical transmittance, transmittance of visible light (450–700 nm) was measured through circular hydrogel lens segments, cut with a cork borer with a diameter of 1.5 mm. Each lens segment was placed in the bottom of a 96 well plate and hydrated in 200 µL of DI water along with a blank well containing only 200 µL of water, with care taken to ensure that there were no air bubbles present in any wells. Absorbance values of each well were measured in a Tecan Infinite M200 Pro spectrophotometer (Tecan, Männedorf, Switzerland), and absorbance values of blank wells were subtracted from wells containing lenses. Contact angle with water was measured via sessile drop contact angle goniometry (Theta Flex Tensiometer, Nanoscience Instruments, Phoenix, AZ, USA). Contact lenses were plasma coated in a SPI Plasma Prep III Plasma Cleaner (SPI Supplies, West Chester, PA, USA), and 5 mm circular cutouts were cut from the lenses with a cork borer. Using a micropipette, a water droplet was placed on the surface of the cutouts and contact angle was measured. Elastic modulus was measured via synthesis of rectangular drug eluting silicone hydrogel sheets via UV photopolymerization using glass slides separated by 500 µm Teflon spacers.
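The equilibrium uptake and imprinting-factor values described above follow from a simple mass balance on the loading solution. The sketch below illustrates that calculation in Python; the supernatant concentrations and dry lens mass are hypothetical placeholders rather than measured values from this study.

```python
# Minimal sketch of the mass-balance uptake and imprinting-factor calculation.
# All numeric inputs below are illustrative placeholders, not experimental data.

def drug_uptake_ug_per_mg(c_initial_ug_ml, c_equilibrium_ug_ml,
                          volume_ml, dry_lens_mass_mg):
    """Drug mass absorbed by the lens, normalized to dry polymer mass.

    Uptake is inferred from depletion of the soaking solution:
    (C_initial - C_equilibrium) * V is the absorbed drug mass.
    """
    absorbed_ug = (c_initial_ug_ml - c_equilibrium_ug_ml) * volume_ml
    return absorbed_ug / dry_lens_mass_mg


def imprinting_factor(templated_uptake, control_uptake):
    """Ratio of normalized uptake in a templated lens to a non-templated control."""
    return templated_uptake / control_uptake


if __name__ == "__main__":
    # Hypothetical example: 3 mL of a 150 ug/mL loading solution (as in the
    # protocol above), an assumed equilibrium concentration of 90 ug/mL for a
    # templated lens, 140 ug/mL for a control, and an assumed 20 mg dry lens.
    templated = drug_uptake_ug_per_mg(150.0, 90.0, 3.0, 20.0)   # 9.0 ug/mg
    control = drug_uptake_ug_per_mg(150.0, 140.0, 3.0, 20.0)    # 1.5 ug/mg
    print(f"templated uptake:  {templated:.1f} ug/mg polymer")
    print(f"control uptake:    {control:.1f} ug/mg polymer")
    print(f"imprinting factor: {imprinting_factor(templated, control):.1f}")
```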
Template Drug Binding Studies

Drug molecules added within the prepolymer formulation are hypothesized to complex with functional monomers, beginning the templating process. During polymerization, these complexes are hypothesized to create complexation points within multiple polymer chains which form macromolecular memory sites within the polymer structure.
Drug reloading, dynamic release experiments, and network structural analysis have been shown by our group to validate the hypothesis with various drugs. , – Equilibrium mass binding of DMSP, DS, and BS in templated silicone hydrogel contact lenses at different M/T ratios and controls are shown in . DMSP templated lenses demonstrated equilibrium binding values of 2.1 ± 0.1 µg drug /mg polymer , 3.7 ± 0.2 µg drug /mg polymer , and 9.9 ± 1.3 µg drug /mg polymer corresponding with normalized M/T ratios of 0.1, 0.3, and 0.6, respectively. Imprinting factor for DMSP templated lenses synthesized at M/T ratios of 0.1, 0.3, and 0.6 were 1.3 ± 0.1, 3.2 ± 0.1, and 6.6 ± 0.1 respectively, demonstrating an increase in drug binding compared to controls and supporting the hypothesis that macromolecular memory sites within the lens lead to an increase in drug uptake. DS templated lenses at different M/T ratios demonstrated equilibrium binding values of 4.9 ± 0.3 µg drug /mg polymer , 20.6 ± 0.3 µg drug /mg polymer , and 24.7 ± 0.5 µg drug /mg polymer corresponding with normalized M/T ratios of 0.1, 0.3, and 0.6 respectively and imprinting factors of 1.0 ± 0.1, 6.7 ± 0.2, and 6.1 ± 0.2, respectively. Equilibrium mass binding of BS in BS templated lenses with M/T ratios of 0.1, 0.3, and 0.6 were 1.3 ± 0.2 µg drug /µg polymer , 11.6 ± 0.5 µg drug /µg polymer, and 18.6 ± 3.7 µg drug /µg polymer , respectively, corresponding with imprinting factors of 0.9 ± 0.3, 5.1 ± 0.4, and 7.9 ± 0.3, respectively. Equilibrium binding results for DS, DMSP, and BS demonstrated an increased drug uptake as M/T ratio increased. Controls demonstrated the lowest drug binding whereas the highest M/T ratios demonstrated the highest drug binding, with higher M/T ratios binding significantly more mass than controls synthesized with the same mol% of functional monomer. These results support the hypothesis that macromolecular memory sites lead to a higher drug uptake and increasing functionality within the lens leads to a higher degree of macromolecular memory site formation in lenses loaded via the templating process as the template drug. Controls in this study contained functionality that matched the template drug at the same concentration as templated lenses, with the only difference being the absence of template drug in the prepolymer formulation in controls. This suggests that the templating process leads to macromolecular memory site formation, which enhances drug uptake rather than only the presence of functional chemistry that interacts with the template drug. In Vitro Physiological Flow Release Release via the microfluidic physiological flow device has been demonstrated by our lab to be a more effective method for correlation of in vitro results to in vivo. , , Release via the microfluidic device more accurately replicates volume and flow dynamics within the tear film to more accurately predict in vivo drug release behavior of drug loaded lenses. Release results of BS loaded templated lenses synthesized at normalized M/T ratios of 1.0 and 0.12 are demonstrated in A. Lenses synthesized at an M/T ratio of 0.12 released their drug payload in 14 days whereas lenses synthesized at an M/T ratio of 1.0 extended release up to 35 days, supporting the hypothesis that an increase in functionality within the lens led to an increase in memory site formation during synthesis, resulting in a decreased release rate. 
Average mass released from lenses synthesized with an M/T ratio of 0.12 was 4.6 ± 0.2 µg/d, whereas average mass release from 1.0 M/T lenses was 4.4 ± 0.1 µg/d. B shows in vitro microfluidic fractional dual release of DS and DMSP from DS + DMSP templated lenses and controls. Release of both DS and DMSP from control lenses occurred rapidly, with approximately 85% of the drug payload within the first day. By the second day, more than 95% of loaded DMSP was released, with the remaining small amount of drug (<5%) released by the following day. Approximately 90% of loaded DS had been released by day 2 with the remaining 10% released over the following two days. Drug release profiles from controls are expected to be slightly better than soaking commercial lenses, as controls contain functional monomers that non-covalently interact with the template drug but lack hypothesized polymer chain templating organization formed in presence of drug. Lenses synthesized with the templating process extended release of both DS and DMSP to over seven days and shifted the release curve downward toward a more constant release rate. Dual release of BS and MOX from lenses synthesized with the templating process and controls are shown in C. Lenses synthesized using the templating process showed MOX release for eight days and BS release for 11 days. Controls demonstrated a faster release of MOX, with ∼40% of the payload released within the first day and the majority released before day 4. Controls demonstrated 11 day release of BS, at a rate shifted to the left of templated lenses signifying a release profile that is less controlled and concentration dependent (further from zero order controlled release). Templated DMSP + DS loaded lenses released DMSP and DS at an average rate of 6.8 ± 1.9 µg/d and 11.4 ± 2.8 µg/d, respectively, whereas templated BS + MOX loaded lenses released BS at an average rate of 28.2 ± 8.6 µg/d and MOX at an average rate of 14.0 ± 5.0 µg/d. DMSP topical drops (0.1%, Maxidex) are administered four to six times daily, and DS topical drops (0.1%, Voltaren) are administered 4 times daily. Assuming a drop volume of 50 µL, each drop delivers approximately 50 µg of medication, resulting in 200 µg/d of applied DS and 200 µg/d of applied DMSP (4 drops/d). For topical drops, approximately 92% of the applied therapeutic is lost due to tear turnover, , resulting in an estimated therapeutic dosage of 16 µg/d of both DS and DMSP. Moxifloxacin topical drops (0.5%, Vigamox) are administered once daily, resulting in 500 µg/d of applied moxifloxacin and an estimated 40 µg/d dosage taking tear turnover into account. Bromfenac topical drops (0.09%, Xibrom) are administered twice daily, resulting in 90 µg/d of applied bromfenac and an estimated 7.2 µg/d dosage considering tear turnover. Release rates from therapeutic lenses approximates the expected therapeutic dosage of topical drops, however via alteration of the M/T ratio, the release rate can be tailored to achieve a different dosage. , Furthermore, it has been demonstrated that with a controlled release strategy, where lens release rate approaches absorption rate into tissue, losses of drug due to tear turnover are substantially reduced. Results from drug reloading and release analysis support the hypothesis that synthesizing lenses in presence of drug molecules and monomers with functional chemistry with affinity for the template drug resulted in an increase in drug binding and a slower, more controlled release. 
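The topical-drop dosage estimates above reduce to simple arithmetic on formulation strength, drop volume, dosing frequency, and the roughly 92% tear-turnover loss. The short script below reproduces the diclofenac, dexamethasone phosphate, and bromfenac figures; the 50 µL drop volume and 92% loss are the assumptions stated in the text, not independently measured values.

```python
# Reproduces the topical-drop dosage arithmetic quoted above.
DROP_VOLUME_UL = 50.0        # assumed volume of one drop (uL)
TEAR_TURNOVER_LOSS = 0.92    # assumed fraction of applied drug lost to tear turnover


def drop_dose_ug_per_day(percent_w_v, drops_per_day):
    """Applied and retained daily dose (ug/day) for a topical drop formulation.

    An x% w/v solution contains x g per 100 mL, i.e. 10*x ug per uL.
    """
    ug_per_drop = percent_w_v * 10.0 * DROP_VOLUME_UL
    applied = ug_per_drop * drops_per_day
    retained = applied * (1.0 - TEAR_TURNOVER_LOSS)
    return applied, retained


for name, pct, drops in [("diclofenac sodium 0.1%, 4 drops/d", 0.1, 4),
                         ("dexamethasone phosphate 0.1%, 4 drops/d", 0.1, 4),
                         ("bromfenac sodium 0.09%, 2 drops/d", 0.09, 2)]:
    applied, retained = drop_dose_ug_per_day(pct, drops)
    print(f"{name}: applied {applied:.0f} ug/d, retained ~{retained:.1f} ug/d")
```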
These results suggest that the templating process led to formation of macromolecular memory sites within synthesized lenses that delayed release and increased drug binding compared to controls. Results from BS release at different M/T ratios suggest that increasing functionality within the lens led to a greater degree of memory site formation, which led to an increased release duration. 1H-NMR analysis demonstrated no difference in chemical structure between template drugs that had been subjected to UV polymerization and release from therapeutic lenses and drugs measured without any modification.

Physical Property and Structural Analysis

Measured physical properties of DS + DMSP loaded lenses and BS + MOX loaded lenses are presented in the . Elastic modulus of DMSP + DS loaded lenses was 3.4 ± 0.6 MPa. Elastic modulus of BS + MOX loaded lenses was 2.1 ± 0.5 MPa. Elastic modulus of silicone hydrogel contact lenses generally ranges from 0.3 to 1.9 MPa and is a tailorable property that can be adjusted by changing the amount of base monomeric units, using a longer chain silicone macromer unit, or using longer crosslinking units that allow for a more flexible polymer network. The contact angle with water of DS + DMSP loaded lenses was determined to be 16.4° ± 3.1°, meeting the commercial standard for contact lenses of <100°. BS + MOX loaded lenses also met this commercial standard, displaying a contact angle with water of 22.6° ± 1.2°. Oxygen permeability (Dk) analysis resulted in a Dk of 83 barrer (95% Confidence Limit (CL): 70–101), or 83 × 10⁻¹¹ (cm²/sec)(ml O₂/ml × mm Hg) at 35°C (Dk intrinsic), in DS + DMSP loaded lenses and 70 barrer (95% CL: 53–103) at 35°C in BS + MOX loaded lenses. These values fall within the range of extended-wear silicone hydrogel lenses on the market today (60–175). Light transmittance through DS + DMSP loaded lenses and BS + MOX loaded lenses was ≥ 96% at 610 nm and greater than 90% across the visible spectrum, indicating that all lenses were optically clear . The equilibrium weight swelling ratio of lenses loaded with DS + DMSP was 0.29 ± 0.09 compared to 0.18 ± 0.03 in controls, and 0.20 ± 0.03 in templated lenses loaded with BS + MOX compared to 0.23 ± 0.05 in controls , fitting within the acceptable range for silicone hydrogel contact lenses. Polymer volume fraction in the swollen state of DS + DMSP templated lenses was 0.86 ± 0.03 compared to 0.86 ± 0.05 in controls, and 0.86 ± 0.02 in BS + MOX templated lenses compared to 0.87 ± 0.03 in controls. Normalized average molecular weight between crosslinks and mesh size of DS + DMSP templated lenses at an M/T ratio of 0.2 and corresponding controls, as well as BS + MOX templated lenses at an M/T ratio of 0.2 and corresponding controls, are highlighted in . Structural analysis indicated that for both BS + MOX templated lenses and DS + DMSP templated lenses, lenses synthesized with the templating process had a mesh size that was not statistically different from controls. These results suggest that formation of macromolecular memory sites leads to extended release and increased drug loading in templated lenses rather than a tighter polymer architecture or smaller mesh size.
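The normalized molecular weight between crosslinks and mesh size reported above come from the rubber-elasticity and Flory-type relations given in the methods. A minimal sketch of those two calculations is shown below; all numeric inputs (modulus, temperature, specific volume, Mn, Cn, Mr, bond length) are placeholders chosen only to make the script runnable, not the measured lens parameters.

```python
import math

R = 8.314  # ideal gas constant, J/(mol*K)


def mc_from_modulus(E_pa, T_k, v2s, v_bar_m3_per_kg, Mn_kg_per_mol):
    """Average molecular weight between crosslinks (kg/mol), from
    E = R*T*v2s**(1/3) / (v_bar*Mc) * (1 - 2*Mc/Mn), solved for Mc."""
    a = R * T_k * v2s ** (1.0 / 3.0)
    return a / (E_pa * v_bar_m3_per_kg + 2.0 * a / Mn_kg_per_mol)


def mesh_size_nm(v2s, Cn, Mc_g_per_mol, Mr_g_per_mol, bond_length_nm=0.154):
    """Mesh size from xi = v2s**(-1/3) * (2*Cn*Mc/Mr)**0.5 * l."""
    return (v2s ** (-1.0 / 3.0)
            * math.sqrt(2.0 * Cn * Mc_g_per_mol / Mr_g_per_mol)
            * bond_length_nm)


if __name__ == "__main__":
    # Placeholder inputs: 2 MPa modulus, v2s = 0.86, specific volume 1.0e-3 m^3/kg,
    # Mn = 10 kg/mol, Cn = 6.2, Mr = 100 g/mol, 0.154 nm backbone bond length.
    Mc = mc_from_modulus(E_pa=2.0e6, T_k=298.15, v2s=0.86,
                         v_bar_m3_per_kg=1.0e-3, Mn_kg_per_mol=10.0)
    xi = mesh_size_nm(v2s=0.86, Cn=6.2, Mc_g_per_mol=Mc * 1000.0, Mr_g_per_mol=100.0)
    print(f"Mc ~ {Mc * 1000:.0f} g/mol, mesh size ~ {xi:.2f} nm")
```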
In this work, we have demonstrated dual release of diclofenac sodium + dexamethasone sodium phosphate and dual release of bromfenac sodium + moxifloxacin from silicone hydrogel contact lenses. DS + DMSP templated lenses were able to extend release of each therapeutic to over seven days at a consistent rate compared to controls that delivered over 85% of their loaded drug within the first day.
Lenses delivered a therapeutically relevant amount of both DS and DMSP, equating to approximately two topical drops' worth of DMSP and four drops' worth of DS continuously each day for the duration of release. Lenses synthesized using the templating process displayed significantly increased drug uptake compared to controls, suggesting successful creation of macromolecular memory sites and an increase in memory site formation as the M/T ratio increased. The hypothesis that the templating process leads to formation of macromolecular memory sites was further supported by structural analysis of templated lenses and controls, which demonstrated statistically similar mesh size, average molecular weight between crosslinks, and polymer volume fraction. BS + MOX templated lenses demonstrated an extension of MOX release from five to eight days and a decrease in the release rate of BS compared to controls. Formation of macromolecular memory sites in BS loaded lenses was supported by several different studies. Drug uptake studies demonstrated a significant increase in BS uptake in templated lenses compared to controls and an increase in BS uptake as M/T ratio increased. Release studies from lenses templated in BS demonstrated an increase in release duration from 14 days to 35 days as the M/T ratio increased from 0.12 to 1.0, suggesting that the increased amount of functional chemistry during the templating process led to an increase in memory site formation. Structural analysis indicated a statistically similar mesh size and polymer volume fraction to controls, suggesting that extended and controlled release was driven by macromolecular memory rather than a tighter polymer mesh. The lenses demonstrated in this study are of significant clinical interest for providing seven or more days of treatment for anterior uveitis and for post-ocular-surgery pain, inflammation, and infection. The ability to control and extend the release of multiple molecules at the same time has significant potential for treating multiple symptoms, and for targeting multiple propagators of ocular inflammation, with a single lens. This technology has the potential to replace topical formulations as a more consistent and more efficacious method of ocular drug delivery, taking dosing out of the patients' hands and delivering a consistent amount of drug for the duration of treatment, leading to better patient outcomes.
Yeasts Prefer Daycares and Molds Prefer Private Homes
9a7f5c26-13f4-436d-bb05-5ab430900aaa
11842513
Microbiology[mh]
Within buildings, conditions for microbial growth are generally harsh due to limited humidity and scarce nutrient availability. However, some microorganisms are adapted to these adverse conditions and can grow and proliferate indoors. Molds and yeasts, both polyphyletic assemblages representing different fungal growth forms, are especially tolerant for the harsh indoor conditions and are often found in surveys of indoor fungal communities . Molds are known to affect our health through the volatiles they produce or their aerially spread spores that may trigger our immune system or cause respiratory disease . Many yeasts, such as Candida and Malassezia, are associated with the human body, where they mainly grow as commensals . However, both yeasts and molds can cause superficial infections such as dandruff, atopic dermatitis/eczema, ringworm, and nail infections , as well as serious infections in immuno-compromised people, e.g., invasive aspergillosis, mucormycosis, and candidemia . The latter ones increased considerably during the COVID-19 pandemic . In addition to the fungi that can grow and survive indoors, fungal spores are transported indoors from outdoor sources and are detected in DNA-based surveys from the built environment . Fungal spores spread easily by air into buildings through windows, doors, and the ventilation system. Further, people and pets may function as vectors and transport fungal spores. The proportion of outdoor fungi spreading into buildings varies throughout the year, with a higher influx during the plant growth seasons, when fungi also are sporulating outdoors . In parts of the world, children of age 1–6 years spend considerable time inside daycare centers. Daycares are often characterized by a high density of people, which potentially influences air quality and humidity. Intensively used rooms have been suggested to allow higher yeast diversity in a study where yeasts were cultured from schools in Poland . In Norway, outdoor play is highly evaluated and children in daycares spend up to 70% and 31% of their time outside during the summer and winter, respectively . Thus, outdoor materials, such as sand, soil, dust, feces from birds and other animals, and plant debris, might easily be brought into daycares, constituting important biomass inputs for the indoor environment. Other elements usually not present in daycares, like potted plants and pets, are more common in homes, where the number of occupants is generally lower. In these respects, daycares may represent somewhat different environmental conditions for indoor fungal growth than homes. The indoor mycobiomes of daycares and private homes in Norway have previously been surveyed in separate studies, revealing a high prevalence of molds and yeasts in both building types . However, a direct comparison between these two settings is still lacking. The main differences between the homes and daycares are the number of occupants and their age distribution, while the buildings themselves often can be similar, including similar architecture and the same building materials. In addition, the temporal usage of homes and daycares differs; while daycares are used intensively over a few hours by many people, homes are often used by fewer people more throughout the whole day. Logistically, it is challenging to obtain samples from a high number of buildings representative of a wide geographic region. 
In this study, we therefore used a community science approach, recruiting inhabitants or daycare personnel to collect dust samples in a predefined simple manner, which allowed us to obtain a high number of samples throughout Norway for statistical comparisons. The central objective of this study was to compare indoor dust mycobiomes from homes and daycares distributed throughout Norway. More specifically, we aimed (i) to reveal whether different indoor mycobiomes can be found in the two building types and which fungal groups may differ, as well as (ii) to identify the factors that may be associated with these differences. Context and Original Datasets We compared two DNA metabarcoding datasets of indoors and outdoors dust samples from homes and daycares located throughout Norway (Supplementary Fig. ), which have been recently published . To recruit community scientists for sampling work, daycares were contacted by mail, while home inhabitants were largely approached through social media and scientific networks. Since the sampling scheme, material, and methods were thoroughly described in the original publications, we provide a condensed version here. Altogether 271 homes and 125 daycares throughout Norway were originally selected for sampling. However, the combined dataset of this study includes a more balanced number of indoor samples (428 from 214 homes and 411 from 123 daycares) and corrects the overrepresentation of Oslo area in the original home dataset. During spring 2018, inhabitants (homes) or personnel (daycares) collected dust samples on doorframes at three specific locations: (1) the main entrance outdoors, (2) main central room (living room in homes), and (3) bathroom. Large daycares sampled from two main central rooms and two bathrooms. The dust samples were obtained using the same sampling kits including sterile FLOQSwabs (Copan Italia spa, Brescia, Italy) and instructions. The returned swabs were stored at − 80 °C until DNA extraction. The inhabitants/personnel also provided metadata about the buildings such as the number of occupants, building features, and previously reported pests and water damages by responding to a questionnaire. In addition, based on the geographical coordinates of the buildings, data for some relevant environmental variables related to climate, geology, and topography were extracted from WorldClim 2 or provided by (see Supplementary Table for metadata). In brief, the DNA metabarcoding workflow included five steps: (i) DNA extraction from the swabs using chloroform and the EZNA Soil DNA Kit (Omega Bio-tek, Norcross, GA, USA); (ii) PCR amplification of the ITS2 region using the primers gITS7 and ITS4 , both including sample specific tags at the 5’-end; (iii) clean up and normalization of PCR products using SequalPrep Normalization Plates (Thermo Fisher Scientific, Waltham, MA, USA), and subsequent pooling of 96 uniquely barcoded samples including technical replicates, negative samples (unused swabs), extraction blanks, PCR negatives, and a mock community; and (v) library preparation and 250 bp paired-end MiSeq Illumina sequencing carried out at Fasteris SA (Plan-les-Ouates, Switzerland). Bioinformatics The bioinformatic analyses for the combined dataset from homes and daycares, whose raw sequences are available on ENA at EMBL-EBI ( https://www.ebi.ac.uk/ena/browser/view/PRJEB42161 ) and Dryad ( https://doi.org/ 10.5061/dryad.sn02v6x5s), respectively, were performed as described by Martin-Sanchez et al. and Estensmo et al. with slight modifications. 
Briefly, raw sequences were demultiplexed using CUTADAPT and sequences shorter than 100 bp discarded. DADA2 was used for quality filtering, error correction, merging of reads into contigs, and chimera removal. ITSx was used to exclude the non-fungal sequences and trim the conserved regions of flanking rRNA genes. To account for intraspecific variability , the generated amplicon sequence variants (ASVs) were clustered into operational taxonomic units (OTUs) using VSEARCH at 97% similarity. LULU was used with default settings to correct for potential OTU over-splitting. Taxonomy of OTUs was assigned using the BLASTn algorithm against the UNITE and INSD dataset for fungi (v. 04.02.2020) . Ecological trophic modes and guilds for the identified taxa were annotated using the FUNGuild tool . OTUs with less than 10 reads and those that were not assigned to the kingdom Fungi were discarded from downstream analyses. For comparing daycares and homes, we downscaled the original datasets by excluding 2 daycares and 57 homes, thereby providing a more balanced dataset in terms of geographical location (15 homes per municipality maximum), collection date (all samples in April–May 2018), and number of indoor samples from homes vs. daycares (428 vs. 411 in the rarefied matrix). The OTU table was rarefied to 2540 reads per sample using the function rrarefy of the VEGAN R package v. 2.6–4 , keeping the majority of samples (only 18 samples were excluded). The final quality-filtered and rarefied matrix, without technical replicates, negative controls, and mock samples, contained 9107 OTUs from 1169 samples. Those OTUs with taxonomic assignment at species, genus, or family level were further annotated into growth forms (filamentous, yeast, dimorphic, lichen, and chytrid) based on literature surveys.

Statistics

Initially, we assessed OTU richness per sample, as well as the total number of OTUs and their overlaps for the two types of building (homes vs. daycares) and compartments (indoor vs. outdoor). For comparison of the indoor mycobiomes, beta diversity was assessed with NMDS ordination of dust samples using metaMDS from the VEGAN R package v. 2.6–4, with the Bray–Curtis dissimilarity index and 200 random starts in search of a stable solution, on the Hellinger-transformed rarefied OTU tables. Continuous environmental variables were regressed against the NMDS ordination and added as vectors on the ordination plots using gg_envfit from the GGORDIPLOTS R package v. 0.3.0 to visualize their association with the indoor dust mycobiomes. To evaluate the correlation between environmental variables and the observed variance in fungal community composition, permutational multivariate analysis of variance (PERMANOVA; 999 permutations) was performed individually on each variable using adonis2 from the VEGAN R package v. 2.6–4 . Relative abundances of taxa at order and genus level were assessed to highlight the differences between homes and daycares. To reveal significant associations ( p < 0.05) between OTUs and the type of building, an indicator species analysis was performed using multipatt from the INDICSPECIES R package v. 1.7.14 . Significant differences in the variance of OTU richness per sample and the relative abundances of selected genera were evaluated with analysis of variance (ANOVA) and t-tests.
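The ordination workflow described above was run in R with the vegan package. For illustration only, the sketch below reproduces the core computation (Hellinger transform, Bray–Curtis dissimilarities, non-metric MDS) in Python on a toy count matrix; scikit-learn's non-metric MDS stands in for vegan's metaMDS, and the random counts are placeholders, not the study data.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Toy OTU table (20 samples x 50 OTUs) standing in for the rarefied matrix.
counts = rng.poisson(lam=3, size=(20, 50)).astype(float)

# Hellinger transform: square root of relative abundances per sample.
hellinger = np.sqrt(counts / counts.sum(axis=1, keepdims=True))

# Bray-Curtis dissimilarity between samples.
bray_curtis = squareform(pdist(hellinger, metric="braycurtis"))

# Non-metric MDS on the precomputed dissimilarities (stand-in for vegan::metaMDS,
# which additionally uses multiple random starts and reports a stress value).
nmds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
           n_init=20, random_state=0)
scores = nmds.fit_transform(bray_curtis)
print("2-D NMDS scores for the first three samples:")
print(scores[:3])
```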
OTU Richness

A weak, but significant, difference in indoor fungal richness between the two building types was detected; we obtained on average 160 and 149 OTUs per sample for the indoor samples from homes and daycares, respectively (t-test, p = 0.02; Fig. a). Further, for homes, the fungal richness within the buildings was significantly higher than in the outdoor dust samples (p = 1.4e-14). By comparison, this increase was not significant for daycares (p = 0.34; Fig. a). In total, the daycare dataset had more OTUs than the homes dataset (7419 and 6408 OTUs, respectively; Fig. b). For both homes and daycares, only 11–12% of the fungal OTUs appeared uniquely outdoors, while 41–47% were uniquely found indoors. In addition, 49% of the indoor fungi (OTUs) were found in both types of buildings, while 20% and 31% of them were uniquely associated with homes and daycares, respectively (Supplementary Fig. ).

Indoor Community Composition

The community composition of the indoor mycobiomes was distinctly different in daycares and homes (Fig. a).
A high number of factors were significantly correlated to the mycobiome composition, but accounted only for small proportions of the variation (Fig. b). The building type (daycare vs. homes) accounted for most of the variation in the indoor mycobiomes (6.3%), followed by the number of occupants (4.2%), and the ventilation system of the building (balanced versus mechanical or natural; 3.5%). In addition, climate variables related to outdoor temperature and precipitation each explained less than 2.1% of the variation in the indoor mycobiome composition. We observed distinct differences in the taxonomic composition between the two building types (Fig. a). The orders Saccharomycetales, Filobasidiales, and Tremellales were proportionally more abundant in daycares. Further, on genus level, ascomycetous yeasts, like Saccharomyces, Candida , and Debaryomyces , as well as basidiomycetous yeasts like Cryptococcus, Filobasidium, Malassezia, Naganishia , and Rhodotorula, were proportionally more abundant in daycares compared to homes (Fig. , t-test p < 10e-5). In homes, saprotrophic and plant pathogenic filamentous ascomycetes in the orders Capnodiales, Dothideales, Eurotiales, and Helotiales were relatively more abundant (Fig. a). These orders include mold genera such as Alternaria, Aspergillus, Cladosporium , and Penicillium , all proportionally more abundant in homes (Fig. ). In contrast, the two mold genera Wallemia (Basidiomycota) and Mucor (Mucoromycota) were proportionally more abundant in daycares (Fig. ). Indicator species analysis also supported these findings and identified some yeasts ( Filobasidium , Cryptococcus , Saccharomyces , and Cyberlindnera ) and Mucor species as the strongest daycare indicators (IndVal > 50%), and the typical molds ( Penicillium , Alternaria , Aspergillus , Cladosporium species) as home indicators (Supplementary Table ). When annotating the OTUs in the final rarefied matrix (6971 of 9107 OTUs; 76.5%) into growth forms, we observed a clear difference in the distribution of yeasts, mycelial fungi, and dimorphic fungi between the two building types (Fig. b), where yeasts are relatively more abundant in daycares while mycelial fungi are relatively more abundant in homes.
Previous dust-mycobiome studies have also observed a higher diversity (richness) of fungi indoors . This phenomenon can be explained by the fact that many outdoor fungi have the ability to enter buildings, while the reverse is apparently not the case to the same degree. Hence, the outdoor environment represents a major source of inoculum to the indoor environment, as also observed in previous studies . The clear differences in indoor community composition between daycares and homes suggest that the number of occupants, and possibly their age profiles, are important drivers for the indoor dust mycobiomes. Previous research has also reported higher airborne fungal loads (measured in colony forming units per m³) in daycares compared to homes . The fact that the included environmental factors only account for a small part of the variation in community composition is a common feature in fungal community studies. The assembly process of fungal communities is probably strongly influenced by random processes, such as spore dispersal and colonization , making exact predictions of mycobiome composition difficult. Furthermore, there is a high temporal (within-year) variation in fruiting and sporulation of outdoor fungi, especially in temperate regions, which is also reflected in the indoor mycobiomes due to the influx of spores .
In our previous temporal study of the mycobiomes in two daycares , dust samples were collected throughout a year in order to evaluate the effect of seasonality on the indoor mycobiomes using DNA metabarcoding. This showed a strong seasonal pattern in the mycobiome composition, with higher fungal richness in summer and fall. Hence, in analyses of indoor fungi, it is important to account for temporal variability by obtaining samples at approximately the same time or by conducting repeated sampling. In the present study, the samples were collected throughout Norway during the same time period (April–May). Thus, even though the climate varies across the country, both the daycare and the home dataset are affected by the same climate variables. As for all environmental DNA-based studies, the taxonomic annotation here might show low resolution and/or errors due to both the short barcode and the accuracy of the reference sequence database. Thus, we decided to not report or discuss taxonomy at the species level. Even at the genus level, we are aware of the possible misidentification between certain genera, e.g., those belonging to Saccharomycetales ( Candida , Debaryomyces , and Saccharomyces ). However, this potential limitation would not affect the overall pattern observed between molds and yeasts in the two building types. We suggest two different hypotheses that may explain this proportional difference. First, more yeasts may be associated with young children, driving the difference. It has been documented that children have a more diverse fungal skin community compared to adults, including genera such as Aspergillus , Epicoccum , Cladosporium , Candida , Rhodotorula , Cryptococcus , and Phoma , in addition to the obligatory lipophilic yeast genus Malassezia that dominates on the skin of adults . Second, the higher density of people per se may drive the proportional difference, since yeasts are more associated with the human body than molds . In addition, Adams et al. reported a significant overlap between the mycobiomes associated with indoor environmental samples (dust and surfaces) and those from the occupants' skin. Several fungal genera with yeast growth, such as Candida , Malassezia , and Saccharomyces , can also be found in the gastrointestinal tract . A higher density of people may therefore lead to a proportional difference between yeasts and molds, which may be mediated in part by the deposition of occupants' dead skin cells on the indoor surfaces. There seemed to be an even stronger difference in community composition between homes and daycares with many children (Fig. a), which may further support the latter hypothesis. However, to be able to conclude on this topic, more in-depth studies with a cross-factorial, balanced study design, potentially also including investigations of the skin/body mycobiome, are needed. In addition, other possible factors that may differ between private houses and daycares, such as food preferences or, possibly, the abundance of invertebrates such as dust mites, could be taken into consideration. Previous research has also shown that indoor environments, such as healthcare centers , homes , and schools , exhibit high yeast diversity. While Marques do Nascimento et al. , Hashimoto et al. , and Ejdys et al. specifically investigated yeasts by culturing, Park et al. conducted metagenomic sequencing of all organisms in 500 classrooms.
Both approaches identified a substantial level of yeast diversity, including the genera Candida , Debaryomyces , Rhodotorula , Cryptococcus , Naganishia , Filobasidium , and Cyberlindnera . Overall, this study showed a striking difference in the relative distribution of yeasts and filamentous fungi in daycares and homes, where yeasts were proportionally more abundant in daycares and filamentous fungi in homes. Whether this difference is directly coupled to health effects is unknown. Molds have been shown to cause asthma and other respiratory diseases in humans in moist environments . Furthermore, moisture in homes and the level of fungal spores outdoors were the best predictors of indoor fungal spore concentrations in 190 homes in Paris, France . Moisture in schools, but not microbes, was the best predictor of respiratory problems in school children in the Netherlands and Finland . However, a recent birth cohort study in Finnish homes reported that early-life exposure to home dust mycobiomes does not have clear negative or positive effects on asthma development in children . Despite the clear association between some yeasts (e.g., Malassezia and Candida ) and skin disorders (atopic dermatitis and mucocutaneous candidiasis, respectively) , some studies have pointed out a potential protective role of dust yeast exposure against allergies and asthma in children . Thus, the marked difference in the proportional abundance of molds and yeasts in the different building types may not lead to negative effects for the occupants. To gain further insight into this topic, future studies should assess inhabitants’ health status coupled to the indoor mycobiomes. Below is the link to the electronic supplementary material. Supplementary file1 (PDF 239 KB)
Transfer accuracy of partially enclosed single hard vacuum-formed trays with 3D-printed models for lingual bracket indirect bonding: A prospective in-vivo study
eebd9291-9e02-474a-8eee-8b8547dd2f6f
11753703
Dentistry[mh]
Bracket transfer accuracy is crucial in lingual orthodontic treatment, ensuring the precise transfer of prescription and overcorrection from the ideal setup to the patient’s dentition . Digital workflows with three-dimensional (3D) printing technology may simplify the manufacturing process of indirect bonding trays . Additionally, transfer accuracy may be enhanced compared to analog workflows due to the ability to transfer a group of teeth simultaneously with greater anatomical guidance, achieved by virtual repositioning of teeth and accompanying brackets to the initial malocclusion state . Double vacuum-formed indirect bonding trays are widely used and have been demonstrated to accurately transfer both labial and lingual brackets in linear and rotational dimensions . However, the transfer accuracy of tip and torque remains questionable. This inaccuracy may result from the flexibility of the inner soft layer and the possible separation between the two layers. Additionally, double vacuum-formed trays cannot be used for single-tooth bonding, in cases of severely crowded teeth or bracket failures, due to the lack of a passive fit between trays and teeth . Therefore, a modified design of vacuum-formed trays with a single hard layer is proposed. This study aims to evaluate the clinical transfer accuracy of partially enclosed single hard vacuum-formed trays based on 3D-printed models for lingual bracket indirect bonding. The null hypothesis was that mean bracket transfer errors were not within the clinically acceptable thresholds of 0.5 mm and 2°. Subjects This prospective clinical study was approved by the Institutional Ethical Review Board at Hanoi Medical University (protocol no. 2301). All participants were thoroughly counseled about the study and signed informed consent. A sample of 32 patients (3 males and 29 females, aged 21–35 years) was consecutively enrolled from September 9th, 2023 to March 7th, 2024. Of these, 13 patients received lingual appliances on both arches, while 19 received them only on the upper arch. The arch-length discrepancies of the upper and lower arches were 3.42 ± 2.85 mm and 6.89 ± 3.30 mm, respectively. Non-extraction treatment was provided for 12 patients, while premolar extraction treatment was performed on 20 patients. A sample size calculation, based on a previous study’s effect size of 0.149 for vacuum-formed trays in lingual bracket indirect bonding, determined that 279 brackets were needed to achieve 80% power in detecting statistically significant mean transfer errors below 0.5 mm or 2° using one-sample t-tests at a significance level of 0.05 . Tray fabrication and lingual bracket bonding Digital impressions were taken with an i700 intraoral scanner (Medit, Seoul, Korea). The scan data were imported into Autolign orthodontic software (Diorco, Gyeonggi-do, Korea) for tooth segmentation and ideal setup creation. ADB lingual brackets (Medico, Gyeonggi-do, Korea) were positioned with the straight archwire concept on the ideal setup and virtually moved together with corresponding teeth back to the initial malocclusion state . The gaps between the brackets and teeth were virtually filled. Additionally, bracket slots and undercuts below bracket wings were virtually blocked out to facilitate subsequent bracket placement and tray removal. The resulting virtual models with ideal bracket positions were 3D-printed with a Photon D2 digital light processing printer (Anycubic, Shenzhen, China). 
Indirect bonding trays were vacuum-formed on the 3D-printed models with single Biocryl hard foils of 1 mm thickness (Scheu Dental, Iserlohn, Germany). After being removed from the models, the trays were sectioned into four parts, including two anterior and two posterior segments. The gingival wall of all bracket lodgements was removed with a rotary disc to create a partially enclosed design, facilitating excess adhesive and tray removal clinically . On clinical bonding, the lingual tooth surfaces were pumiced and etched with 37% phosphoric acid for 20 seconds, followed by rinsing and drying. Assure Plus primer (Reliance, Itasca, IL) was applied to the tooth surfaces and GoTo adhesive (Reliance) was applied to the bracket bases. The tray was then seated with light finger pressure on the occlusal surface, followed by excess flash removal and light-curing for 40 seconds. Pointed burs were used to partially grind out hard tray materials around the brackets, facilitating tray removal without debonding the brackets. Data acquisition Post-bonding digital impressions were taken with the same intraoral scanner to assess the discrepancy between actual and planned bracket positions. The measurement of bracket transfer error followed a previously established methodology . The post-bonding scan data and the virtual model with ideal bracket positions, serving as the target and reference data, were first roughly aligned. Then, they were segmented into individual teeth, removing the gingiva and retaining only the clinical crowns and brackets. This initial alignment ensured consistent cropping and facilitated the subsequent superimposition based on the tooth surface. The centers of the manufacturer-provided virtual bracket patches without bracket bases were set at the origin of a coordinate system, with the x, y, and z-axis parallel to the mesiodistal edge, vertical edge, and buccolingual edge of the bracket slot, respectively. For each tooth, the brackets on both the target and reference data were initially aligned with the bracket patch, constituting the first and second superimpositions. Subsequently, the bracket patch was attached to the target data followed by superimposing the target data and bracket patch combination onto the reference data using a local best-fit algorithm that considered only the tooth surfaces. The new coordinates of the bracket patch’s center in the x, y, and z directions would indicate the bracket linear transfer errors. Meanwhile, the angular transfer errors would be represented by the rotation of the bracket slot’s edges projected onto the respective coordinate planes . The 3D inspection software (Meshmixer, Autodesk, USA) was utilized to obtain these measurements. For each bracket, linear transfer errors were presented in the mesiodistal, buccolingual, and occlusogingival dimensions. Additionally, angular transfer errors were described as rotation, tip, and torque. Absolute values were calculated for each component to prevent the cancellation between positive and negative values. To assess measurement reliability, a second examiner remeasured 50 brackets followed by calculating the intraclass correlation coefficient (ICC). Statistical analysis Data analyses were performed using SPSS 23.0 software (IBM, Armonk, NY) with the statistical significance level set to α = 0.05. The data were assessed for the normal distribution using Kolmogorov-Smirnov tests. Transfer errors of each tooth group and the entire sample were presented as means and standard deviations. 
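To make the measurement convention above concrete, the following sketch shows how linear and angular transfer errors could be read out once the actual bracket patch has been expressed in the planned bracket's coordinate frame; the example centre offset and edge matrix are invented values, and the mapping of the three projections to tip, rotation, and torque follows the convention stated above rather than any code released with the study.

```python
import numpy as np

def angle_in_plane(v, axis_a, axis_b):
    """Deviation (deg) of edge vector v from coordinate axis `axis_a`, after
    projecting v onto the plane spanned by axes `axis_a` and `axis_b`."""
    return np.degrees(np.arctan2(v[axis_b], v[axis_a]))

# Hypothetical result of the tooth-surface superimposition: the actual bracket
# patch centre (mm) and its slot-edge directions, expressed in the planned
# bracket frame (x = mesiodistal, y = vertical/occlusogingival, z = buccolingual).
center = np.array([0.03, -0.11, 0.07])
edges = np.array([[0.999, -0.021, 0.010],   # columns: actual x, y, z slot edges
                  [0.021,  0.998, 0.043],
                  [-0.009, -0.043, 0.999]])

# Linear transfer errors are the absolute centre coordinates.
mesiodistal_mm, occlusogingival_mm, buccolingual_mm = np.abs(center)

# Angular transfer errors from slot edges projected onto coordinate planes.
tip_deg      = abs(angle_in_plane(edges[:, 0], 0, 1))  # mesiodistal edge, x-y plane
rotation_deg = abs(angle_in_plane(edges[:, 0], 0, 2))  # mesiodistal edge, x-z plane
torque_deg   = abs(angle_in_plane(edges[:, 1], 1, 2))  # vertical edge, y-z plane
```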
Two-sample t-tests comparing right and left values revealed no significant differences, warranting the combination of right and left teeth in the same tooth group for further analysis. One-tailed t-tests were performed to determine whether the bracket transfer errors were statistically within the clinically acceptable thresholds of 0.5 mm and 2° for linear and angular dimensions, respectively. These clinically acceptable thresholds, widely used in previous studies on bracket transfer accuracy, were employed based on the objective grading system of the American Board of Orthodontics . Additionally, the percentages of bracket transfer errors within the clinically acceptable thresholds were described. 
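A minimal sketch of the statistical workflow just described (the sample-size calculation and the one-tailed tests against the 0.5 mm and 2° thresholds); the statsmodels power call and the simulated error values are illustrative assumptions, not the study's actual software or data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestPower

# Sample size for a one-sided one-sample t-test with effect size 0.149,
# alpha = 0.05 and 80% power; this roughly reproduces the 279 brackets reported.
n_required = TTestPower().solve_power(effect_size=0.149, alpha=0.05,
                                      power=0.80, alternative="larger")
print(f"required brackets: {n_required:.0f}")

# Simulated placeholder errors with the study's reported mean/SD, used only to
# illustrate the test calls (not the real per-bracket measurements).
rng = np.random.default_rng(0)
occlusogingival_mm = np.abs(rng.normal(0.106, 0.098, size=539))
torque_deg = np.abs(rng.normal(2.485, 2.318, size=539))

# One-tailed one-sample t-tests: is the mean error below the acceptable threshold?
t_lin, p_lin = stats.ttest_1samp(occlusogingival_mm, 0.5, alternative="less")
t_ang, p_ang = stats.ttest_1samp(torque_deg, 2.0, alternative="less")

# Percentage of brackets falling within the thresholds, as also reported.
pct_within_occl = 100 * np.mean(occlusogingival_mm <= 0.5)
```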
A single clinician bonded 559 lingual brackets, of which 17 debonded during tray removal, resulting in a 3.04% debonding rate. Due to crowding, bracket placement was not possible on 18 teeth during the initial bonding appointment. 
After excluding 3 brackets due to poor scan quality, 539 brackets were analyzed to assess the transfer accuracy. The total bonding time, from the initial placement of the first tray segment to the removal of the final segment, was 22.23 ± 6.13 minutes for the upper arch and 21.77 ± 5.31 minutes for the lower arch. The inter-examiner ICCs were 0.965 for linear dimensions and 0.944 for angular dimensions, indicating the high reliability of the measuring methodology. presents the means and standard deviations of bracket transfer errors and the result of one-tailed t-tests. shows box-and-whisker plots of bracket transfer errors for each tooth group. Of the entire sample, the linear transfer errors were 0.052 ± 0.044 mm, 0.076 ± 0.065 mm, and 0.106 ± 0.098 mm for the mesiodistal, buccolingual, and occlusogingival dimensions, respectively. The angular transfer errors of the entire sample were 0.795 ± 0.720°, 1.344 ± 1.138°, and 2.485 ± 2.318° for rotation, tip, and torque, respectively. Comparisons between upper and lower teeth, as well as among different tooth groups, revealed no consistent patterns, with the exception of consistently lower angular transfer errors observed in the lower premolar and molar brackets compared to their upper counterparts. One-tailed t-tests revealed statistical significance (P < .05) for all linear dimensions across all tooth groups. Regarding angular dimensions, one-tailed t-tests indicated statistical significance (P < .05) for rotation and tip, but not for torque, in all tooth groups. Within each tooth type, linear transfers generally demonstrated greater accuracy than angular transfers. Among linear transfer errors, the occlusogingival dimension typically showed the highest magnitude, followed by the buccolingual and then mesiodistal dimensions, except for upper canines. For angular dimensions, torque was consistently the least accurate, followed by tip and then rotation across all tooth types. shows the percentages of bracket transfer errors that fall within the clinically acceptable thresholds of 0.5 mm and 2° for each tooth group. Regarding linear errors, 100% of brackets exhibited mesiodistal and buccolingual transfer errors within the acceptable threshold, while 99.3% of brackets demonstrated acceptable occlusogingival transfer accuracy. Among angular errors, the percentage of rotational accuracy was highest at 93.1%, while that of torque was lowest at 54.0%. Bracket transfer errors within the acceptable thresholds occur with similar frequency across all tooth types without consistent patterns. One-tailed t-tests revealed that linear bracket transfer errors for mesiodistal, buccolingual, and occlusogingival dimensions were statistically significantly within the clinically acceptable threshold of 0.5 mm. Similarly, angular transfer errors for rotation and tip were statistically significantly less than the 2° threshold, while torque errors did not show this significance level. Thus, the null hypothesis was rejected. The higher proportion of female patients in this study aligns with findings from previous research, which may reflect differing aesthetic preferences or priorities between genders . The bracket debonding rate with single hard vacuum-formed trays (3.04%) was higher in this study compared to the rate reported by Anh et al (0.98%) using double vacuum-formed trays . 
This difference may be attributable to the increased rigidity of the single hard trays, potentially making tray removal more challenging, especially in cases of crowding requiring varied bracket insertion directions. Additionally, bonding time was longer with single hard vacuum-formed trays compared to the double trays used by Anh et al. This could be due to the need for partial grinding of the tray material around each bracket to facilitate tray removal, a process requiring additional time. Conversely, the increased rigidity of single hard trays, compared to the inner soft layer of double vacuum-formed trays, may provide greater spatial stability, potentially resulting in improved angular accuracy . This is supported by the finding that angular transfer errors in this study were lower than those reported by Anh et al using double trays. Furthermore, the increased rigidity of single hard trays may facilitate a more passive fit. This is particularly beneficial for single-tooth bonding where anatomical landmarks are limited, a recognized limitation of double vacuum-formed trays. This advantage becomes even more significant with lingual appliances, as the shorter inter-bracket spans in lingual orthodontics may necessitate bonding teeth sequentially rather than all at once at the beginning of treatment. However, transferring a group of teeth simultaneously during initial bonding is preferred to enhance anatomical guidance, as evidenced by the superior bracket transfer accuracy observed in this study compared to a previous study using individual bracket transfer jigs . Single vacuum-formed trays also offer the advantage of material saving compared to double trays . This study assesses a modified design for vacuum-formed trays, diverging from the current trend of utilizing directly 3D-printed trays for indirect bonding. However, using single vacuum-formed trays with 3D-printed models is more cost-effective than direct 3D-printing of trays, due to simplified design and reduced post-processing requirements . As no studies in the literature have evaluated the accuracy of 3D-printed indirect bonding trays for lingual brackets, this study’s findings are compared with those from a study using 3D-printed trays for labial brackets . That study reported mean transfer errors of 0.10 mm mesiodistally, 0.10 mm buccolingually, 0.18 mm occlusogingivally, 2.47° in rotation, 2.01° in tip, and 2.55° in torque. While the linear transfer errors were within the clinically acceptable threshold, the angular errors were not. Notably, all transfer errors in that study exceeded those observed in the current study. This discrepancy may be attributed to the flexibility of the tray material and the inadequate bracket retention within the tray, as silicone and modeling wax were necessary to secure brackets in a similar study utilizing 3D-printed trays . Furthermore, as both the 3D-printing material and the orthodontic adhesive are light-cured, their adhesive natures may interact, potentially hindering tray removal and causing tearing . In this study, hard foils with an initial thickness of 1 mm were used, resulting in an approximate 0.5 mm thickness after vacuum forming. Thinner 0.8 mm foils were not suitable, as they failed to securely hold brackets during preliminary testing. Conversely, thicker 1.5 mm foils hindered complete bracket insertion, likely due to the increased rounding of bracket lodgement edges during cooling, leading to a mismatch with the bracket shape. 
The partially enclosed design of bracket lodgements, similar to that of 3D-printed indirect bonding trays, facilitates both excess flash removal and tray removal . This is advantageous compared to fully enclosed designs, where the gingival wall may impede tray removal and increase the risk of bracket debonding, particularly with hard tray materials. Blocking out bracket slots and undercuts is necessary to prevent tray material from entering these spaces, ensuring that brackets can be fully inserted into their lodgements. Although the lower arch exhibited more crowding than the upper arch, no consistent pattern in transfer accuracy was observed between the upper and lower anterior teeth. This is attributed to the exclusion of severely crowded teeth from the initial bonding and subsequent analysis. The higher angular transfer accuracy of lower premolar and molar brackets compared to their upper counterparts may be attributed to enhanced visibility in the lower arch during lingual bracket bonding, allowing for easier verification of correct bracket and tray positioning. The higher linear transfer accuracy compared to angular accuracy aligns with previous studies on both lingual and labial brackets, suggesting that angular bracket positioning is inherently less stable than linear positioning . Additionally, challenges in determining bracket axes on scan data due to nonparallel edges and distorted surfaces caused by rounding effects, as well as limitations in scanning reflective metal surfaces, may further contribute to angular transfer errors . The mesiodistal dimension exhibited the highest linear transfer accuracy, likely due to the secure hold provided by both the mesial and distal walls of the bracket lodgements. Buccolingual transfer accuracy was slightly lower, potentially because while bracket positioning in this dimension is influenced by both lingual walls and tooth surfaces, the designed gap between the bracket base and tooth surface reduces the tooth surface’s stabilizing effect. The lowest linear transfer accuracy was observed in the occlusogingival dimension, likely due to vertical bracket positions being controlled solely by the occlusal walls of the lodgements. Similarly, for angular accuracy, the highest control was seen in rotation, due to the combined influence of mesial, distal, lingual walls, and tooth surfaces. Tip control was moderate, primarily influenced by the mesial and distal walls. Torque exhibited the lowest angular transfer accuracy, likely due to control primarily from the lingual walls and only partial control from the tooth surfaces. The high bracket transfer accuracy, with mean transfer errors within clinically acceptable limits in five out of six analyzed dimensions and a nearly 100% rate of accurate linear transfer, demonstrates the compatibility of single hard vacuum-formed trays with 3D-printed models for clinical lingual bracket bonding. Any suboptimal tooth positions resulting from significant transfer errors can be corrected during the finishing stage through wire bending or bracket repositioning. This study has several limitations. First, it lacked control groups using alternative methods such as double vacuum-formed or directly 3D-printed indirect bonding trays. Second, transfer accuracy for single-tooth bonding was not assessed. Additionally, only one type of lingual bracket was evaluated. Future studies should compare the transfer accuracy of various indirect bonding tray types across different lingual bracket designs. 
Our study showed high lingual bracket transfer accuracy of partially enclosed single hard vacuum-formed trays with 3D-printed models in the mesiodistal, buccolingual, and occlusogingival dimensions, rotation, and tip. However, the transfer of torque remains questionable. Linear transfer accuracy is generally higher than angular transfer accuracy. Bracket transfer errors outside the acceptable thresholds exhibit similar frequency across all tooth types. S1 File Dataset. (XLSX) S2 File Reporting checklist for cohort study. (DOCX)
Multivariate classification of multichannel long-term electrophysiology data identifies different sleep stages in fruit flies
78f38f55-9baa-4f39-9f9e-7a2d0805123f
10881036
Physiology[mh]
Humans spend a third of their life engaged in sleep, wherein they become less responsive to external stimuli. Most animals studied so far, starting from the tiny fruit fly to the large sperm whale , display extended periods of quiescence, which are now categorized as sleep. Evolutionary conservation of the sleep state in all animals suggests that its benefits outweigh the potential risks and vulnerabilities brought on by losing awareness of one’s external environment. Sleep deprivation has been shown to produce deficits in learning and memory , immune system malfunction , and stress regulation . However, the organization of sleep in relation to its potential functions remains unclear. Different theories have been proposed for functions of sleep including those involving processes such as neuronal plasticity and synaptic downscaling and metabolic waste clearance . However, sleep research methodology is largely driven by research in humans and other mammals so the primary way of classifying sleep states has therefore been using electrophysiological readouts, such as electroencephalography (EEG). By identifying distinct electrical signatures associated with the different stages of sleep, different functional roles have been hypothesized for them. For example, rapid eye movement (REM) sleep in mammals has been proposed to regulate motor learning and memory consolidation , while slow wave sleep has been proposed to regulate synaptic strength and cellular homeostasis mechanisms . One of the primary challenges for understanding sleep architecture in invertebrates has been developing a capacity to record and assess brain-wide patterns of electrical activity across long time periods that encompass several sleep-wake transitions. In this context, small animals such as the fruit fly Drosophila melanogaster present as extremely challenging subjects, although they already potentially provide a wealth of molecular genetic tools to help better understand sleep biology. Previous sleep studies in flies have either recorded just a single local field potential (LFP) channel during spontaneous sleep bouts or from multichannel probes during short (~15 min) bouts of genetically induced sleep . In other work, whole-brain calcium imaging in sleep-deprived flies revealed distinct stages of spontaneous sleep , although these recordings were rarely long enough to display any revealing sleep architecture, and it remains unclear how these different sleep stages might be manifested across the fly brain from the central complex to optic lobes. Some reasons for the lack of whole-brain or multichannel sleep data in Drosophila are technical in nature: (i) It is difficult to perform long-term electrophysiological recordings with multiple electrodes in such small brains, the survival rate is low, and the recording tools used do not yet allow for consistent spatial positioning of multiple electrodes in different flies. (ii) Calcium imaging, on the other hand, which lacks in temporal precision compared to LFPs, does allow for consistent spatial locations of recordings (with image registration tools); however, concerns with photobleaching and phototoxicity have made it difficult to achieve the long-term recordings to acquire spontaneous sleep data. Subsampling provides one solution: For example, in a recent study, 24-hour recordings were conducted by recording for only 1 s after every minute (thus recording for only 1.6% of the overall time) . 
However, this subsampling approach might miss important sleep transitions or longer-lasting sleep phenomena. Comparing brain activity during sleep in flies with similar data from other animals would ideally involve readouts akin to a whole-brain EEG, which, in Drosophila , would necessarily involve miniaturized multichannel probes such as those used previously for visual studies as well as induced sleep and anesthesia experiments . In addition, these recordings would ideally be supplemented by detailed behavioral analysis beyond the simple locomotory determinants that have traditionally defined sleep in flies . Mammalian sleep stages involve distinct microbehaviors in addition to electrophysiological correlates , and this seems to be true for some invertebrates as well . In this study, we optimized a multichannel LFP recording preparation for Drosophila flies to track long-term neural activity in 16 channels across one hemisphere of the fly brain, in a transect from the retina to the central complex. The flies underwent spontaneous sleep bouts while walking/resting on an air-supported ball and survived long enough to provide 20 hours of data over one day-night cycle. We developed calibration tools to consistently record from similar spatial locations in different flies. We used machine learning–based methods [support vector machines (SVMs) and random forest classifiers] to first investigate the structure of sleep bouts and then explored the spectral features across multiple brain channels. We also used machine learning techniques (pose tracking and identification) to identify fly microbehaviors during these long-term recordings and to determine their potential association with different sleep stages. Together, our analyses identify neural correlates of sleep stages in the fly central brain, associated with rhythmic proboscis extensions (PEs) as a key behavioral feature. We find that the LFP features associated with PEs during wake and sleep are dissimilar, suggesting that a distinct brain state is driving the sleep functions associated with this rhythmic microbehavior. Behavioral analysis of tethered flies during sleep and wake Before conducting any electrophysiological recordings, we first investigated how flies slept when tethered to a rigid metal post while being able to walk on an air-supported ball . Flies were filmed overnight under infrared illumination, and locomotory behavior was quantified using a pixel subtraction method to identify sleep epochs, defined by the absence of locomotion or grooming behavior for 5 min or more . We also tracked the movement of different body parts, including the proboscis, antennae, and abdomen, to detect potential microbehaviors during sleep. For this, we used machine learning [DeepLabCut ] to train a classifier to track microbehavioral movements through wake and sleep . As shown previously , tethered flies were able to sleep in this context ( and fig. S1A). Consistent with a previous study , we also observed regular PEs during sleep bouts (fig. S1B), which often occurred in rhythmic succession ( , orange trace). We also observed antennal movements and found that these were periodic in a subset of flies ( , red trace). Since both antennal movements and PEs were often rhythmic during sleep, we characterized both microbehaviors in the frequency domain ( , top) to determine whether they differed between sleep and wake. 
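As a rough illustration of this behavioral pipeline, the sketch below scores frame-to-frame pixel differences to find ≥5-min immobility bouts and then checks a tracked proboscis trace for periodicity in the frequency domain; the file names, thresholds, and frame rate are assumptions, and the study's actual classification (DeepLabCut tracking plus trained classifiers) is more elaborate.

```python
import cv2
import numpy as np
from scipy.signal import welch

FPS = 30                                      # assumed video frame rate
cap = cv2.VideoCapture("fly_overnight.avi")   # hypothetical overnight recording
prev, motion = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev is not None:
        motion.append(cv2.absdiff(gray, prev).sum())   # summed pixel difference
    prev = gray
cap.release()

motion = np.array(motion)
moving = motion > np.percentile(motion, 75)            # illustrative threshold
min_frames = 5 * 60 * FPS                              # 5-min immobility criterion

# Collect immobility bouts of at least 5 min as candidate sleep epochs.
bouts, start = [], None
for i, is_moving in enumerate(moving):
    if not is_moving and start is None:
        start = i
    elif is_moving and start is not None:
        if i - start >= min_frames:
            bouts.append((start, i))
        start = None

# Periodicity of a tracked microbehavior (e.g., DeepLabCut proboscis-to-eye
# distance): a clear low-frequency peak in the power spectrum marks rhythmicity.
proboscis = np.load("proboscis_distance.npy")          # one value per frame
freqs, psd = welch(proboscis, fs=FPS, nperseg=30 * FPS)
mask = freqs > 0.05
dominant_freq = freqs[mask][np.argmax(psd[mask])]
```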
We found that a greater proportion of the sleeping states displayed both antennal periodicity and PE periodicity, compared to the waking states ( , bottom; and fig. S1, E and G). However, the time course and presence of individual PEs (fig. S1, B and C) and the dynamics (e.g., inter-PE intervals and frequency) of periodic PEs were not different between sleep and wake (fig. S1, D and F), even if this behavior varied across sleep and wake. A previous study suggested that PEs during sleep are accomplishing a specific function in flies linked to waste clearance and that these might be specific to a deeper sleep stage . We therefore next examined whether PE and antennal periodicity varied throughout a sleep bout. For this, we segmented all >5-min sleep bouts into five temporal epochs, as done previously for spontaneous sleep experiments in tethered flies ( , top schema) . The first 2 min and last 2 min of sleep (flanked by locomotor behavior) were analyzed separately for microbehaviors and compared to “midsleep” epochs, which could be of different durations. To examine whether the likelihood of periodicity for both antennae and proboscis varied on the basis of the sleep epochs, we used multilevel modeling instead of traditional repeated measures of analysis of variance (ANOVA) (as different flies had varying numbers of sleep epochs). To understand whether the likelihood of the periodicity varies by sleep epoch, we defined two models (separately for both antennae and proboscis): a null model, where the likelihood of periodicity depends on the mean per fly, and an epoch model, where the likelihood of periodicity depends on the epoch (e.g., midsleep, etc.). For details, refer to the “Models for antennal and proboscis periodicity” section. For all the microbehaviors, the “epoch” model (where the periodicity depends only on the sleep epoch) emerged as the winning model, and a reliable main effect of epoch was found ( P < 0.001) in all cases. Further, we performed post hoc tests using Tukey adjustment (for multiple comparisons) to identify differences between pairs that are significant. Thus, we found an apparent increase in the likelihood of periodicity for both antennae and proboscis during the middle segments of sleep bouts . This suggested physiological differences that might be detected in the fly brain, so we then performed electrophysiological recordings in a similar context. Long-term multichannel recordings with spontaneous sleep bouts We recorded LFPs across the fly brain using a linear 16-channel electrode inserted into the left eye of flies in a similar context as above, walking (or resting) on an air-supported ball . The electrode insertion location was positioned to sample LFPs from the retina to the central brain ( , white arrowhead) . The depth of insertion of the electrode was optimized using a visual stimulus calibration protocol, based on a reliable LFP polarity reversal identified in the fly inner optic lobes (fig. S2 and see the “Polarity reversal” section). The change in polarity (positive to negative deflections in response to a periodic visual stimulus) was always positioned between electrodes 11 and 13 in all flies, before the start of the long-term LFP recordings. This LFP polarity–based method allowed us to maintain a level of recording consistency across flies in terms of spatial locations of the electrodes, thereby allowing us to compare and combine LFP data across multiple flies. 
To further ensure reproducible recording locations, we also developed a dye-based registration method (figs. S3 and S4 and see the “Dye-based localization” section) and estimated recording channel locations in the brain for two sample flies. Using this method, we identified three broadly defined brain recording regions to simplify our subsequent analyses : central channels (1 to 5), middle channels (6 to 10), and peripheral channels (12 to 16), here grouped by polarity reversal in channel 11. In addition, for further analysis, as the polarity reversal channel is used for re-referencing, the number of channels used in analysis becomes 15. We used the above calibration steps and recorded LFP data from 16 flies over the course of a day and night cycle ( and see the “Movement analysis” section for data exclusion criteria). We designed our recordings so that experiments were started at different times in different flies to achieve complete coverage of a full day-night cycle. We, however, only examined the first 8 hours of LFP data in each fly , to ensure that we were always recording from active and responsive animals (all 16 flies were still alive after 12 hours). The behavior of the flies was recorded under infrared lighting , and their movements were quantified using a combination of pixel difference and contour thresholding between neighboring frames (see the “Movement analysis” section). As flies are known to be crepuscular in nature (more active in the twilight periods—dawn and dusk), we exploited this activity characteristic to confirm that our subjects were healthy. We analyzed their activity patterns across different crepuscular periods (before and after dawn and dusk periods). For both the dawn and dusk periods, the “crepuscular-type” model (where the movement depends on the crepuscular type; before/after dawn and before/after dusk) emerged as the winning model and a reliable main effect of crepuscular type was found ( P < 0.01 in dawn and P < 0.001 in dusk). Further, we performed post hoc tests using Tukey adjustment (for multiple comparisons) to identify differences between pairs that are significant. We found that movement activity was higher in dawn periods compared to predawn and higher in dusk periods compared to both predusk and postdusk periods . For details, refer to the “Crepuscular analysis” section and “Models for movement pattern across crepuscular periods” section under the “Multilevel models” section. This shows that flies remain healthy and active in the recording preparation. To further confirm that the recording preparation is not detrimental, we compared average activity levels across the 8 hours of recording time for each fly (fig. S5A). We found that flies were on average significantly more active the first hour, but then average activity levels remained the same for the following 7 hours. This suggests that after an initial “settling in” period of increased activity, health remained robust for the duration of the recordings that were used in our sleep/wake analyses. For details, refer to the “Models for movement pattern across recorded hours” section. Sleep was defined by 5-min immobility criteria, based on previous observations in unrestrained flies and tethered flies . Fly mobility along with classification of different behavioral states (“awake” and “sleep”) for an example sleep bout is shown in . 
Since it was unclear whether flies would even sleep in this multichannel recording preparation, we tallied immobility bout durations across the day and the night for each fly (here, we used 16 hours of video data for each fly all of which survived; see the “Movement analysis” section for data exclusion criteria), expecting that flies should be sleeping more at night on average. We found that flies were able to sleep in this preparation and that nighttime sleep bouts were indeed longer than daytime sleep bouts [median = 22.42 min versus 13.99 min, respectively; t (13) = −2.32, P < 0.05] . This confirms that similar to single-channel LFP recordings , flies slept reliably in this multichannel recording preparation, allowing us to assess changes in LFP activity across the fly brain during sleep and wakefulness and to relate these changes to sleep microbehaviors. Having confirmed that flies are able to sleep in our recording preparation, we next cross-validated the consistency of our LFP recordings across recording hours, to ascertain that LFP quality was not changing as a function of recording duration. We computed the average LFP power spectrum across all channels in the awake and sleep periods across the eight recording hours (hours 1 to 8) for each fly. We found that for both awake and sleep periods, none of the recording hours differed from each other on average (fig. S5, B and C), indicating that an awake or a sleep epoch at the beginning of the recording is quantitatively similar to an awake or sleep epoch at the end of the recording (here, 8th hour) or at other recording hours. This shows that brain activity remains as robust after 8 hours of recording, validating this restricted time frame for our LFP analyses. For details, refer to the “Models for LFP power spectrum across recorded hours” section. LFP differences across the brain during spontaneous sleep and awake Next, we focused on the multichannel data to identify potential differences between sleep and wake across the fly brain, separating our recordings into three broad regions: central, middle, and peripheral . An example sleep bout and its corresponding spectrograms across the central, middle, and peripheral channels reveal increased activity during sleep in the central brain compared to the periphery . In addition, we noted variegated effects in the lower frequencies (5 to 10 Hz) within the sleep bout ( , arrowheads) and significant LFP activity (5 to 40 Hz) associated with locomotion. When we examined sample LFP data more closely across all channels , we observed higher LFP amplitudes in the central and middle channels than in the peripheral channels and more activity during wake than during sleep . The fly brain is not necessarily quiet during sleep, with some channels (e.g., channels 5 to 7) displaying increased activity compared to other channels . To substantiate our observations, we performed spectral analysis on the data. For this purpose, we epoched the LFP data into 60-s bins and computed the power spectrum per epoch per channel [see the “Preprocessing” and “Power spectrum analysis (sleep versus wake)” sections under the “LFP analysis” section]. Since LFP data recorded from flies can be sensitive to physiological artifacts such as heartbeat and body movements , we used a common referencing system (based on a brain-based signal) that allowed for removal of nonbrain-based physiological noise. 
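A minimal sketch of this per-epoch spectral step; the sampling rate, file name, and the exact reference channel are assumptions for illustration (the data were re-referenced to a brain-based common signal before computing 60-s power spectra per channel).

```python
import numpy as np
from scipy.signal import welch

FS = 1000                              # assumed LFP sampling rate (Hz)
EPOCH = 60 * FS                        # 60-s epochs
lfp = np.load("lfp_16ch.npy")          # hypothetical array, shape (16, n_samples)

# Common re-referencing against a brain-based channel (here the polarity-reversal
# channel, index 10), which is then dropped, leaving 15 channels for analysis.
lfp_ref = np.delete(lfp - lfp[10], 10, axis=0)

n_epochs = lfp_ref.shape[1] // EPOCH
spectra = []
for e in range(n_epochs):
    seg = lfp_ref[:, e * EPOCH:(e + 1) * EPOCH]
    freqs, pxx = welch(seg, fs=FS, nperseg=2 * FS, axis=1)
    spectra.append(pxx)
spectra = np.array(spectra)            # shape: epochs x channels x frequencies

band = (freqs >= 5) & (freqs <= 40)    # restrict to the 5-40 Hz range analysed here
band_power = spectra[:, :, band].mean(axis=2)   # mean power per epoch and channel
```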
Plotting the power spectral density across the three different channel groupings for different frequency bands (5 to 40 Hz) revealed consistently greater power in an example fly during wake than during sleep across the entire recording transect . Although decreased LFP power during sleep is consistent with previous findings involving single-channel recordings in flies , it was unexpected to see that even the fly optic lobes are significantly less active during sleep compared to wake, suggesting a brain-wide effect. We next examined more closely the relationship between individual channels and LFP spectral frequency between sleep and wake states. We used nonparametric resampling tools to identify the precise patterns (frequency × channel pairs) differing across awake and sleep at the group level. The outcome of the cluster permutation analysis would be regions of interest (ROI) or clusters across the frequency × channel space that differs between sleep and wake states. For this purpose, we first computed the difference in mean spectral data across wake and sleep for individual flies. Then, we performed a cluster permutation test (flies × frequencies × channels) on the difference between wake and sleep data ( , left) to reveal one significant cluster i.e., ROI (frequency × channel pair) encompassing all frequencies between 5 and 40 Hz and all channels (1 to 15) ( , left). This confirms the spectral results (at group level) in that showed a brain-wide decrease in power during sleep compared to wake. For details, refer to the “Power spectrum analysis (sleep versus wake)” section. We then sought to identify subclasses of frequencies and channels within this significant cluster that might be more specifically associated with sleep. To do this, we computed the effect sizes for every channel × frequency combination ( , right). This revealed an interesting frequency structure distinguishing sleep from wake. This included areas of interest in the 5- to 10-Hz and 25- to 40-Hz range in the central channels (channels 1 to 3). A 7- to 10-Hz frequency effect was identified in a previous study as being relevant to sleep transitions in Drosophila , and the higher 25- to 40-Hz range overlaps with frequencies associated with attention-like behavior in flies . Consistent with previous work, it is, however, clear that LFP activity is mostly decreased during all of sleep compared to wake, even in the 7- to 10-Hz range that has been associated with sleep transitions (fig. S6). LFP differences during induced sleep Sleep can be acutely induced in Drosophila by using optogenetic or thermogenetic activation of sleep-promoting neurons . We were curious whether induced sleep revealed similar effects across the fly brain, following the same statistical approaches used above for spontaneous sleep. For this, we focused on whole-brain recordings taken from 104y-Gal4/UAS-TrpA1 flies, a sleep-promoting line (fig. S7A) that expresses a temperature sensitive cation channel in the fan-shaped body in the central brain and other regions of the brain . As shown in a previous study and other Drosophila sleep studies , activating these neurons with Transient receptor potential A1 (TrpA1) (by increasing the temperature to ~29°C) results in behavioral quiescence and induced sleep, whereas control strains remain awake and active. In these recordings, a different multichannel probe was used (fig. S7B), with 16 recording sites that spanned the entire brain from eye to eye . 
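The cluster-based comparison described above (and reused below for the induced-sleep data) could be implemented along the following lines; the use of MNE-Python's cluster permutation function and the input file are assumptions, and the study's exact thresholding choices are not restated here.

```python
import numpy as np
from mne.stats import permutation_cluster_1samp_test

# Per-fly (wake - sleep) mean spectral differences, shape (n_flies, n_channels, n_freqs).
diff = np.load("wake_minus_sleep_power.npy")   # hypothetical file

t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(
    diff, n_permutations=5000, tail=0, out_type="mask", seed=0)

for mask, p in zip(clusters, cluster_pv):
    if p < 0.05:
        n_ch = np.unique(np.nonzero(mask)[0]).size
        print(f"significant cluster: p = {p:.3f}, spanning {n_ch} channels")

# Per channel x frequency effect sizes (Cohen's d), used to localise which
# frequency bands drive the difference within a significant cluster.
cohens_d = diff.mean(axis=0) / diff.std(axis=0, ddof=1)
```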
We preprocessed the induced sleep LFP data (see the "Thermogenetic sleep induction" section) in a similar fashion to our spontaneous sleep LFP data. We first contrasted the mean power spectra per fly under two conditions: baseline and sleep induction (fig. S7C). As above, we then performed a cluster permutation test (flies × frequencies × channels) on the difference between baseline wakefulness and induced sleep to reveal a significant cluster (frequency × channel pair). Thus, we uncovered a significant cluster (fig. S7D) in the central brain channels across all (5 to 40 Hz) frequency bands, whereas the 104y-Gal4/+ control flies did not reveal such a cluster (fig. S7, E and F). Note that sleep induction using this strain yielded an opposite effect to what we found during spontaneous sleep: LFP activity during induced sleep was on average higher than during baseline wakefulness (fig. S7D), while it was lower during spontaneous sleep . In addition, the effect observed during induced sleep was only observed in the central channels, whereas the spontaneous sleep effects appear to at least cover the entire hemisphere from center to periphery. This shows that genetically induced sleep in flies can produce notably different electrophysiological signatures from spontaneous sleep, consistent with several previous similar observations . For the rest of this study, we focus on spontaneous sleep. Distinct sleep stages identified by machine learning Our earlier analysis of microbehaviors during sleep in this preparation suggests that sleep is not a single phenomenon and that the requisite 5-min immobility criterion might not fully capture potential LFP and behavioral changes that could occur across a sleep bout. There is evidence that sleep quality (via arousal threshold probing) in wild-type Drosophila flies also changes across a bout of quiescence , suggesting that flies transition from lighter to deeper sleep stages. To assess whether this might also be evident in our multichannel recordings, we divided our LFP data (for all channels) into five different temporal segments, analyzing only sleep epochs that were 5 min or longer : (i) "presleep": the 2 min (−2 to 0 min) before flies stopped moving; (ii) "earlysleep": the first 2 min (0 to 2 min) after the start of a sleep bout; (iii) "latesleep": the last 2 min of sleep before mobility resumed; (iv) midsleep: any time between earlysleep and latesleep; and (v) awake: the rest of our LFP data. Our partitioning of the LFP data matches a similar partitioning applied to whole-brain calcium imaging of flies engaged in spontaneous sleep . To examine how LFP-based signatures change within a sleep bout, we decided to perform a hypothesis-agnostic analysis through machine learning techniques. To perform this machine learning–based classification, we first used SVM-based techniques. Briefly, an SVM is a supervised learning model that constructs a hyperplane or set of hyperplanes in a high-dimensional space (using the kernel trick for nonlinear mapping functions), with the goal of maximizing the separation distance (functional margin) between the hyperplane and the closest training data points of any class . The optimal hyperplane is chosen so that the generalization error on new data points in the test dataset is kept low (fig. S8A). For detailed steps for preprocessing of data and implementation of classifiers, refer to the "Sleep staging by classifiers" section. 
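A minimal sketch of the awake-versus-midsleep SVM described above, with probability outputs then applied to epochs the classifier never saw (e.g., presleep); the feature files, the RBF kernel, and the use of Platt-scaled probabilities are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Per-epoch feature vectors (flattened channel x frequency power), hypothetical files.
X_awake = np.load("awake_epochs.npy")
X_mid = np.load("midsleep_epochs.npy")
X = np.vstack([X_awake, X_mid])
y = np.r_[np.ones(len(X_awake)), np.zeros(len(X_mid))]   # 1 = awake, 0 = midsleep

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)

# Probability that unseen epochs resemble the awake class, tracked minute by
# minute around sleep onset and offset (e.g., presleep epochs below).
X_presleep = np.load("presleep_epochs.npy")
awake_col = list(clf.classes_).index(1)
p_awake = clf.predict_proba(X_presleep)[:, awake_col]
```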
The probabilistic prediction per class per iteration is shown in . It is interesting to note several points. First, the probability of awake data is ~0.7, and that of midsleep is ~0.0, indicating that the classifier performs well on classes that it has already been trained on. Second, at the epoch −2 to −1 min, when the fly is still moving (yellow circles), LFP data indicate that it is closer to resembling sleep (<0.5), before dropping fast to ~0.3 (turquoise circles) in the first 2 min of sleep. The above analysis indicates that with this approach, we could predict the probability that a fly will fall asleep 2 min before the start of the immobility period. Just 1 min before flies fall asleep, the LFP data indicate a brief moment more closely resembling wake (yellow circles), perhaps associated with grooming periods [observed in honeybees, for example ]. The first 2 min of sleep (turquoise circles) reveals a probability metric halfway between midsleep and wake, suggesting either a gradual descent into deeper sleep or a distinct sleep stage. Last, at the epoch from x − 2 to x − 1 min before mobility resumes (brown circles), the probability metric returns to a similar level as early sleep. Immediately after mobility resumes, the LFP data are classified as no different from awake, i.e., there is no postsleep ambiguity. Note that only the awake and midsleep data have been seen by the classifier, the rest of the data −4 to +2 min and x − 2 to x + 2 min have never been seen by the classifier. In addition, midsleep collapses a wide range of different sleep durations in different flies, so it could still be averaging different sleep states within. Nevertheless, our results suggest that broadly dichotomizing midsleep and wake identifies other sleep (and wake) stages that resemble neither. To confirm this, we next examined whether midsleep episodes of different durations are different from each other. We first plotted the different durations of classes of the midsleep episodes . On the basis of a distribution centered around a median, we defined midsleep episodes of <14 min as short midsleep and >14 min as long midsleep. We next used an SVM-based classifier (as before) but trained to distinguish between short and long midsleeps. We identified the probability estimates of the short midsleep class on both the short and long midsleep categories. If short and long midsleeps are different from each other, then they should follow two characteristics, similar to those established by the classifier trained on awake versus midsleep. They are the following: (i) The awake class displayed probability values ~0.7 and midsleep around ~0.0, so the values of the trained classes were as different from each other and different from 0.5 (chance) as well, indicating that the classifier has identified features able to differentiate between awake and midsleep classes. (ii) The awake and midsleep class probability values differed significantly from each other (indicating stability of values) across different classifier train/test iterations. 
When these two criteria were applied to the classifier trained on short versus long midsleep, the second criterion was satisfied (the short and long midsleep probability values differed significantly from each other across train/test iterations, indicating stable classifier performance), but the first was not: short midsleep yielded probability values of ~0.4 (close to chance) and long midsleep values of ~0.0. This suggests that the classifier did not find features that clearly differentiate the short and long midsleep classes and, hence, that midsleeps of different durations display similar LFP qualities across the fly brain. Whether these include intercalated epochs of different quality sleep remains an open question. Model-based spectral analysis across different channels Having revealed how multichannel LFP data can be used to differentiate across different temporal stages of sleep, we next decided to identify what channels might be important for revealing this. For this purpose, we used a multilevel modeling approach. To reveal how spectral data might change throughout the fly brain across a sleep bout, we calculated the mean spectral power for each of the aforementioned epochs and pooled data from central, middle, and peripheral channels. Because different flies had varying numbers of sleep epochs, we used multilevel models instead of traditional repeated-measures ANOVA. For details, refer to the "Models for spectral analysis" section. To understand the modulation of the LFP power spectrum by sleep epoch, we defined multiple models: a null model, where the power spectrum depends on the mean per fly; an epoch model, where the power spectrum depends on the LFP epoch type (wake or sleep); a channel model, where the power spectrum depends on the LFP channel (central, middle, or peripheral); and an epoch channel model, where the power spectrum depends on a combination of epoch type and LFP channel type. The "epoch channel" model emerged as the winning model. In the epoch channel model, we found that there was a reliable main effect of both epoch ( P < 0.001) and channel ( P < 0.001) on the power spectrum, and the interaction between epoch and channel also had a reliable effect ( P < 0.001) on the power spectrum. In summary, the above model-based analysis confirms that the power spectrum of the LFP data varies on the basis of the channel location and the epoch state of the fly. We then proceeded to examine more closely how differences in the sleep LFP might be segregated across the fly brain using post hoc tests (using Tukey adjustment for multiple comparisons) from the epoch channel model. In the central channels, the awake data were significantly different from all sleep categories and, critically, were also different from the presleep data. Note that, behaviorally, the fly is still considered awake in the presleep period (i.e., it is still moving). Thus, the ability to predict sleep at least 2 min before the onset of immobility, which was revealed in our SVM analysis , might be explained by these significant spectral differences only observed in the central channels. In the middle channels, the awake data were also significantly different from all sleep categories but were not different from the presleep data. Further, the presleep period was significantly different from the earlysleep, midsleep, and latesleep periods. In the peripheral channels, the awake data were significantly different from all sleep categories but were again not different from the presleep data. 
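As a rough sketch of the model-comparison logic described above (not the study's actual code or software), a linear mixed model with a random intercept per fly can be fit for the null and epoch × channel models and compared with a likelihood-ratio test; the column names and the use of statsmodels are assumptions, and the Tukey-adjusted post hoc contrasts reported above would then be computed from the winning model.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical long-format table: one row per fly x epoch x channel group, with
# columns 'power' (mean 5-40 Hz power), 'epoch', 'channel', and 'fly'.
df = pd.read_csv("epoch_channel_power.csv")

null_fit = smf.mixedlm("power ~ 1", df, groups=df["fly"]).fit(reml=False)
full_fit = smf.mixedlm("power ~ epoch * channel", df, groups=df["fly"]).fit(reml=False)

# Likelihood-ratio test between the nested models.
lr = 2 * (full_fit.llf - null_fit.llf)
df_diff = len(full_fit.fe_params) - len(null_fit.fe_params)
p_value = stats.chi2.sf(lr, df_diff)
print(f"LRT: chi2({df_diff}) = {lr:.1f}, p = {p_value:.2e}")
```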
Together, mean power spectral data across different channels were thus able to differentiate between awake, presleep, and different sleep epochs of sleep. However, the post hoc analysis did not differentiate among sleep epochs (earlysleep, midsleep, and latesleep). Since this is inconsistent with previous findings using single glass electrodes , we questioned whether the pooling of channel × frequency data (three broad brain regions × one overall power spectrum) could be hiding more specific effects that might become evident with the full (15 × 145) dimension of channels × frequencies. LFP features across different temporal stages of sleep Having established the existence of different temporal stages of sleep using a classifier based on SVM and confirming the same using model-based analysis, we were next interested in the features in the LFP data (which channels at what frequencies are important for distinguishing epochs within a sleep bout) that help us differentiate these stages. For this purpose, we used random forest classifiers. A random forest classifier is a class of supervised learning algorithms that uses an ensemble of multiple decision trees for classification/regression. This could be illustrated by an example . In the first step, subsets of training data (#1 to # n ) were created by making a random sample of size N with replacement. This allows for the ensemble of decision trees (#1 to # n ) to be decorrelated, and the process of this random sampling is called bagging (bootstrap aggregation). In the second step, each decision tree (#1 to # n ) picks only a random subsample of features (feature randomness) instead of all features (again allowing for the decision trees to be decorrelated). In the final step, all the decision trees create individual predictions of classes, and the final outcome would be resolved by simple majority voting (illustrated here with a goal of classifying awake versus sleep). Thus, bagging and feature randomness allow for the random forest to perform better than individual decision trees. Furthermore, we also computed classifier performance metrics (see the “Classifier metrics” section) such as precision, recall, F1 score, and normalized confusion matrix for evaluation. We also used a permutation importance technique (see the “Multiclass random forest classifier analysis and feature importance” section) to identify the relative importance of features in the performance of classifiers, thereby identifying physiological features (channels × frequency) that are important for differentiating across categories. We first decided to use the random forest classifiers to determine whether there were any differences between day and night sleep, hence performing classification across the classes: “daysleep” and “nightsleep.” We identified LFP features discriminating across daysleep and nightsleep in the peripheral channels across frequency bands (10 to 30 Hz), consistent with a previous study using single-channel LFP . We also computed the normalized confusion matrix , which revealed excellent performance in predicting the daysleep and nightsleep classes. Classifier performance metrics across the daysleep and nightsleep classes shown in also indicate good performance across classes (>0.9). We then performed a multiclass classification of the following classes: awake, presleep, earlysleep, midsleep, latesleep, and identified important LFP features discriminating across categories. 
The most important features fall within a narrow range of channels (1 to 3) and frequencies (5 to 10 Hz). This indicates that the 5- to 10-Hz frequency range within the central channels is the most important in resolving different sleep stages. We also computed the normalized confusion matrix , which revealed excellent performance in predicting the multiple classes (green boxes). This indicates that classifier features (channels × frequency) are sufficient to distinguish multiple sleep stages (classes) and furthermore provide direct evidence of multiple sleep stages with distinct frequency components. Classifier performance metrics across the target classes shown in also indicate good performance across classes for the different sleep segments (>0.9). Last, we also cross-validated the utility of the permutation-based technique in identifying important features across epochs. For this purpose, we created a multiclass random forest classifier, with target classes as awake, sleep, and identified the features that are important in this classifier (fig. S9A). The most important features are actually distributed evenly among all the features (channels × frequency), thus cross-validating our previous clustering results ( , left), wherein we showed that the LFP differences across awake and sleep are distributed across all channels and frequencies. PE behavior during sleep in multichannel recordings Earlier, we identified that rhythmic PEs during midsleep , which we propose, describe a distinct sleep stage in Drosophila . However, it is unclear whether brain activity associated with PEs is sleep-like or PE-specific. This distinction is important, as it would disambiguate a unique brain state (deep sleep) from a specific behavior associated with that state (PEs). To identify PEs in our electrophysiological dataset, we again used DeepLabCut to track different body parts of the fly . We further used multiple classifiers based on the tracking data, followed by manual verification to identify the PEs. Sample PEs in an example fly along with a few of the features ( x , y proboscis location, likelihood of location, and distance of proboscis to eye) are shown in . For more details on the proboscis detection steps, refer to the “Proboscis tracking for flies on electrophysiology setup” section. Our classifier accuracy was over 80% for most flies : The ground truth was validation by a human observer on classifier detected events. In , we plot the mean proboscis to eye distance for all the flies averaged across awake and sleep bouts. As described earlier for flies without implanted electrodes, PEs executed during wake and sleep are behaviorally similar and, hence, would be difficult to distinguish from each other using video alone. Similar to our behavioral dataset, PE events usually occur in rhythmic bouts, rather than single events. In , we plot the interproboscis interval period, which is the interval between consecutive PE events in a single proboscis bout. Most proboscis events occur within 1.8 s (95th percentile) of each other. As shown before in our behavioral data without implanted electrodes, the interproboscis interval does not vary across awake and sleep periods. Next in , we decided to probe the number of single (one PE event) and multiple (>1 PE event) across different flies. We found that occurrences of single PE events (both across wake and sleep periods) are significantly lower than multiple PE events using a pairwise t test with t (13) = 3.72, P < 0.01. 
To further illustrate this point in , we plotted the burst length of a PE event (number of extension events within a PE bout) and found that only 33% of the events are single PE, while the rest are multiple PE events. Overall, our investigation of PEs in this multichannel recording dataset is in concurrence with our first (electrode-free) dataset, suggesting that inserting probe into the fly brain does not alter several measures associated with this microbehavior. Previous work has linked PEs with a deep sleep stage in flies . We therefore next investigated whether the number of PEs varied across a sleep bout in our LFP recording dataset, as suggested in our purely behavioral dataset . We found that more PE events occur after 5 min of a sleep bout, compared to those occurring before the 5th min of sleep [pairwise t test, t (12) = −2.8, P < 0.05], suggesting that PEs indeed predominate during later stages of sleep. We also compared PEs immediately after flies had awakened from sleep, which revealed no significant difference [pairwise t test, t (13) = −1.92, P > 0.05] between PE bouts occurring after the 5th min of an awake bout compared to those occurring before the 5th min of an awake bout, confirming that transitions into sleep (rather than transitions back to wake) were associated with increased PE events. We next asked whether the number of PE events changed across a sleep bout in our multichannel recording preparation. To determine whether the PE event count varies across different temporal sleep stages , we used multilevel models. For details, refer to the “Models for PE event counts” section. The time_label model (where the PE event count depends only on the specific temporal sleep stage) emerged as the winning model. Further, we performed post hoc tests using Tukey adjustment (for multiple comparisons) to identify differences between pairs that are significant. We found that PE events occur more often in midsleep compared to other sleep stages. Returning to our original observation that most PEs occur after 5 min of sleep, we plotted the distribution of PE events occur in the midsleep epoch across all flies and found that 95 percentile of all PE events in midsleep indeed occur after 2.5 min of the midsleep epoch (thus, 4.5 min from sleep onset). LFP features of a deep sleep stage with PEs We next questioned whether PEs occurring during sleep and wake had similar neural correlates or whether the sleep-related events were indeed different and thus indicative of a unique sleep-related function. We therefore focused on the multichannel data to identify any differences in the LFP activity associated with PEs during wake and sleep epochs. We first identified the PE periods (refer to the “Identification of proboscis periods” section), extracted the LFP data, and epoched them into 1-s bins. Second, we used spectral analysis to determine whether epochs characterized by PEs differ in frequencies across different channels for wake compared to sleep. For this purpose, we computed the spectral power for every 1-s epoch per channel (see the “Power spectrum analysis” section), using as before a common reference system for re-referencing the LFP data. Third, we used nonparametric resampling tools to identify the precise patterns (frequency × channel pairs) differing in proboscis periods within awake and sleep at the group level. 
Before conducting any electrophysiological recordings, we first investigated how flies slept when tethered to a rigid metal post while being able to walk on an air-supported ball . Flies were filmed overnight under infrared illumination, and locomotory behavior was quantified using a pixel subtraction method to identify sleep epochs, defined by the absence of locomotion or grooming behavior for 5 min or more . We also tracked the movement of different body parts, including the proboscis, antennae, and abdomen to detect potential microbehaviors during sleep. For this, we used machine learning [DeepLabCut ] to train a classifier to track microbehavioral movements through wake and sleep . As shown previously , tethered flies were able to sleep in this context ( and fig. S1A). Consistent with a previous study , we also observed regular PEs during sleep bouts (fig. S1B), which often occurred in rhythmic succession ( , orange trace). We also observed antennal movements and found that these were periodic in a subset of flies ( , red trace). Since both antennal movements and PEs were often rhythmic during sleep, we characterized both microbehaviors in the frequency domain ( , top) to determine whether these were different between sleep and wake. We found that a greater proportion of the sleeping states displayed both antennal periodicity and PE periodicity, compared to the waking states ( , bottom; and fig. S1, E and G). However, the time course and presence of individual PEs (fig.
S1, B and C) and the dynamics (e.g., inter-PE intervals and frequency) of periodic PEs were not different between sleep and wake (fig. S1, D and F), even if this behavior varied across sleep and wake. A previous study suggested that PEs during sleep are accomplishing a specific function in flies linked to waste clearance and that these might be specific to a deeper sleep stage . We therefore next examined whether PE and antennal periodicity varied throughout a sleep bout. For this, we segmented all >5-min sleep bouts into five temporal epochs, as done previously for spontaneous sleep experiments in tethered flies ( , top schema) . The first 2 min and last 2 min of sleep (flanked by locomotor behavior) were analyzed separately for microbehaviors and compared to “midsleep” epochs, which could be of different durations. To examine whether the likelihood of periodicity for both antennae and proboscis varied on the basis of the sleep epochs, we used multilevel modeling instead of traditional repeated measures of analysis of variance (ANOVA) (as different flies had varying numbers of sleep epochs). To understand whether the likelihood of the periodicity varies by sleep epoch, we defined two models (separately for both antennae and proboscis): a null model, where the likelihood of periodicity depends on the mean per fly, and an epoch model, where the likelihood of periodicity depends on the epoch (e.g., midsleep, etc.). For details, refer to the “Models for antennal and proboscis periodicity” section. For all the microbehaviors, the “epoch” model (where the periodicity depends only on the sleep epoch) emerged as the winning model, and a reliable main effect of epoch was found ( P < 0.001) in all cases. Further, we performed post hoc tests using Tukey adjustment (for multiple comparisons) to identify differences between pairs that are significant. Thus, we found an apparent increase in the likelihood of periodicity for both antennae and proboscis during the middle segments of sleep bouts . This suggested physiological differences that might be detected in the fly brain, so we then performed electrophysiological recordings in a similar context. We recorded LFPs across the fly brain using a linear 16-channel electrode inserted into the left eye of flies in a similar context as above, walking (or resting) on an air-supported ball . The electrode insertion location was positioned to sample LFPs from the retina to the central brain ( , white arrowhead) . The depth of insertion of the electrode was optimized using a visual stimulus calibration protocol, based on a reliable LFP polarity reversal identified in the fly inner optic lobes (fig. S2 and see the “Polarity reversal” section). The change in polarity (positive to negative deflections in response to a periodic visual stimulus) was always positioned between electrodes 11 and 13 in all flies, before the start of the long-term LFP recordings. This LFP polarity–based method allowed us to maintain a level of recording consistency across flies in terms of spatial locations of the electrodes, thereby allowing us to compare and combine LFP data across multiple flies. To further ensure reproducible recording locations, we also developed a dye-based registration method (figs. S3 and S4 and see the “Dye-based localization” section) and estimated recording channel locations in the brain for two sample flies. 
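Returning to the temporal segmentation used for the microbehavior analysis above (and reused later for the LFP analyses), a minimal sketch of the epoch labeling might look as follows. The label names and the 2-min windows follow the text; the function itself, its argument names, and the example values are purely illustrative and are not the authors' code.

```python
def label_epoch(t, bout_start, bout_end):
    """Assign the temporal labels used in the text to a time point t (minutes).

    'presleep' is the 2 min before immobility, 'earlysleep' the first 2 min of
    the bout, 'latesleep' the last 2 min, 'midsleep' anything in between, and
    'awake' everything else. Sketch only; bout boundaries come from the
    immobility-based sleep detector.
    """
    if bout_start - 2 <= t < bout_start:
        return "presleep"
    if bout_start <= t < bout_start + 2:
        return "earlysleep"
    if bout_end - 2 <= t < bout_end:
        return "latesleep"
    if bout_start + 2 <= t < bout_end - 2:
        return "midsleep"
    return "awake"

# Example: a sleep bout from minute 10 to minute 25 of a recording.
print([label_epoch(t, 10, 25) for t in (7, 9, 11, 18, 24, 30)])
# ['awake', 'presleep', 'earlysleep', 'midsleep', 'latesleep', 'awake']
```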
Using this dye-based registration method, we identified three broadly defined brain recording regions to simplify our subsequent analyses : central channels (1 to 5), middle channels (6 to 10), and peripheral channels (12 to 16), separated by the polarity reversal at channel 11. Because the polarity reversal channel was used for re-referencing, 15 channels remained for the analyses. We used the above calibration steps and recorded LFP data from 16 flies over the course of a day and night cycle ( and see the "Movement analysis" section for data exclusion criteria). We designed our recordings so that experiments were started at different times in different flies to achieve complete coverage of a full day-night cycle. We, however, only examined the first 8 hours of LFP data in each fly , to ensure that we were always recording from active and responsive animals (all 16 flies were still alive after 12 hours). The behavior of the flies was recorded under infrared lighting , and their movements were quantified using a combination of pixel difference and contour thresholding between neighboring frames (see the "Movement analysis" section). As flies are known to be crepuscular in nature (more active in the twilight periods of dawn and dusk), we exploited this activity characteristic to confirm that our subjects were healthy. We analyzed their activity patterns across different crepuscular periods (before and after dawn and dusk periods). For both the dawn and dusk periods, the "crepuscular-type" model (where the movement depends on the crepuscular type; before/after dawn and before/after dusk) emerged as the winning model, and a reliable main effect of crepuscular type was found ( P < 0.01 at dawn and P < 0.001 at dusk). Further, we performed post hoc tests using Tukey adjustment (for multiple comparisons) to identify pairs that differed significantly. We found that movement activity was higher in dawn periods compared to predawn periods and higher in dusk periods compared to both predusk and postdusk periods . For details, refer to the "Crepuscular analysis" section and the "Models for movement pattern across crepuscular periods" section under the "Multilevel models" section. This shows that flies remained healthy and active in the recording preparation. To further confirm that the recording preparation is not detrimental, we compared average activity levels across the 8 hours of recording time for each fly (fig. S5A). We found that flies were on average significantly more active during the first hour, but average activity levels then remained the same for the following 7 hours. This suggests that after an initial "settling in" period of increased activity, health remained robust for the duration of the recordings that were used in our sleep/wake analyses. For details, refer to the "Models for movement pattern across recorded hours" section. Sleep was defined by a 5-min immobility criterion, based on previous observations in unrestrained flies and tethered flies . Fly mobility along with the classification of different behavioral states ("awake" and "sleep") for an example sleep bout is shown in . Since it was unclear whether flies would even sleep in this multichannel recording preparation, we tallied immobility bout durations across the day and the night for each fly (here, we used 16 hours of video data for each fly, all of which survived; see the "Movement analysis" section for data exclusion criteria), expecting that flies should be sleeping more at night on average.
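As one way to operationalize the 5-min immobility criterion on a per-frame movement trace, the sketch below returns the immobility bouts that qualify as sleep. The frame rate, array names, and padding logic are illustrative assumptions, not the pipeline used in the study.

```python
import numpy as np

def detect_sleep_bouts(moving, fps, min_sleep_s=300):
    """Return (start, end) frame indices of immobility bouts lasting >= min_sleep_s.

    moving : 1-D boolean array, True when the fly moved in that frame
             (e.g., from pixel subtraction between neighboring frames).
    fps    : frames per second of the video.
    """
    quiescent = ~np.asarray(moving, dtype=bool)
    # Pad with False on both sides so every quiescent run has a rising and falling edge.
    padded = np.concatenate(([False], quiescent, [False]))
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    starts, ends = edges[0::2], edges[1::2]              # run boundaries
    long_enough = (ends - starts) >= min_sleep_s * fps
    return list(zip(starts[long_enough], ends[long_enough]))

# Example: a fake 2-hour recording at 30 fps with one 10-min quiet period.
fps = 30
moving = np.ones(2 * 3600 * fps, dtype=bool)
moving[20000:20000 + 10 * 60 * fps] = False
print(detect_sleep_bouts(moving, fps))   # one bout of 18,000 frames
```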
We found that flies were able to sleep in this preparation and that nighttime sleep bouts were indeed longer than daytime sleep bouts [median = 22.42 min versus 13.99 min, respectively; t (13) = −2.32, P < 0.05] . This confirms that similar to single-channel LFP recordings , flies slept reliably in this multichannel recording preparation, allowing us to assess changes in LFP activity across the fly brain during sleep and wakefulness and to relate these changes to sleep microbehaviors. Having confirmed that flies are able to sleep in our recording preparation, we next cross-validated the consistency of our LFP recordings across recording hours, to ascertain that LFP quality was not changing as a function of recording duration. We computed the average LFP power spectrum across all channels in the awake and sleep periods across the eight recording hours (hours 1 to 8) for each fly. We found that for both awake and sleep periods, none of the recording hours differed from each other on average (fig. S5, B and C), indicating that an awake or a sleep epoch at the beginning of the recording is quantitatively similar to an awake or sleep epoch at the end of the recording (here, 8th hour) or at other recording hours. This shows that brain activity remains as robust after 8 hours of recording, validating this restricted time frame for our LFP analyses. For details, refer to the “Models for LFP power spectrum across recorded hours” section. Next, we focused on the multichannel data to identify potential differences between sleep and wake across the fly brain, separating our recordings into three broad regions: central, middle, and peripheral . An example sleep bout and its corresponding spectrograms across the central, middle, and peripheral channels reveal increased activity during sleep in the central brain compared to the periphery . In addition, we noted variegated effects in the lower frequencies (5 to 10 Hz) within the sleep bout ( , arrowheads) and significant LFP activity (5 to 40 Hz) associated with locomotion. When we examined sample LFP data more closely across all channels , we observed higher LFP amplitudes in the central and middle channels than in the peripheral channels and more activity during wake than during sleep . The fly brain is not necessarily quiet during sleep, with some channels (e.g., channels 5 to 7) displaying increased activity compared to other channels . To substantiate our observations, we performed spectral analysis on the data. For this purpose, we epoched the LFP data into 60-s bins and computed the power spectrum per epoch per channel [see the “Preprocessing” and “Power spectrum analysis (sleep versus wake)” sections under the “LFP analysis” section]. Since LFP data recorded from flies can be sensitive to physiological artifacts such as heartbeat and body movements , we used a common referencing system (based on a brain-based signal) that allowed for removal of nonbrain-based physiological noise. Plotting the power spectral density across the three different channel groupings for different frequency bands (5 to 40 Hz) revealed consistently greater power in an example fly during wake than during sleep across the entire recording transect . Although decreased LFP power during sleep is consistent with previous findings involving single-channel recordings in flies , it was unexpected to see that even the fly optic lobes are significantly less active during sleep compared to wake, suggesting a brain-wide effect. 
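A minimal sketch of the re-referencing, 60-s epoching, and spectral step described above is given below, assuming the LFP data are available as a NumPy array of channels × samples and using SciPy's Welch estimator. The sampling rate, reference channel index, window settings, and variable names are assumptions rather than the study's exact parameters.

```python
import numpy as np
from scipy.signal import welch

def epoch_power(lfp, fs, ref_idx=10, epoch_s=60, fmin=5, fmax=40):
    """Common-reference the LFP, cut it into fixed-length epochs, and return
    per-epoch, per-channel power spectra restricted to fmin-fmax Hz.

    lfp     : array of shape (n_channels, n_samples)
    fs      : sampling rate in Hz
    ref_idx : index of the reference channel (e.g., the polarity-reversal channel)
    """
    # Subtract the reference channel from every channel, then drop it (15 remain).
    reref = np.delete(lfp - lfp[ref_idx], ref_idx, axis=0)
    samples_per_epoch = int(epoch_s * fs)
    n_epochs = reref.shape[1] // samples_per_epoch
    spectra = []
    for i in range(n_epochs):
        seg = reref[:, i * samples_per_epoch:(i + 1) * samples_per_epoch]
        freqs, pxx = welch(seg, fs=fs, nperseg=int(fs))   # 1-s windows, 1-Hz resolution
        band = (freqs >= fmin) & (freqs <= fmax)
        spectra.append(pxx[:, band])
    return freqs[band], np.stack(spectra)                 # (n_epochs, n_channels, n_freqs)

# Example with synthetic data: 16 channels, 10 min at 1 kHz.
rng = np.random.default_rng(0)
freqs, power = epoch_power(rng.standard_normal((16, 10 * 60 * 1000)), fs=1000)
print(power.shape)   # (10, 15, 36)
```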
We next examined more closely the relationship between individual channels and LFP spectral frequency between sleep and wake states. We used nonparametric resampling tools to identify the precise patterns (frequency × channel pairs) differing across awake and sleep at the group level. The outcome of the cluster permutation analysis is a set of regions of interest (ROIs), or clusters, across the frequency × channel space that differ between sleep and wake states. For this purpose, we first computed the difference in mean spectral data across wake and sleep for individual flies. Then, we performed a cluster permutation test (flies × frequencies × channels) on the difference between wake and sleep data ( , left) to reveal one significant cluster, i.e., an ROI (frequency × channel pair), encompassing all frequencies between 5 and 40 Hz and all channels (1 to 15) ( , left). This confirms the group-level spectral results in , which showed a brain-wide decrease in power during sleep compared to wake. For details, refer to the "Power spectrum analysis (sleep versus wake)" section. We then sought to identify subclasses of frequencies and channels within this significant cluster that might be more specifically associated with sleep. To do this, we computed the effect sizes for every channel × frequency combination ( , right). This revealed an interesting frequency structure distinguishing sleep from wake. This included areas of interest in the 5- to 10-Hz and 25- to 40-Hz ranges in the central channels (channels 1 to 3). A 7- to 10-Hz frequency effect was identified in a previous study as being relevant to sleep transitions in Drosophila , and the higher 25- to 40-Hz range overlaps with frequencies associated with attention-like behavior in flies . Consistent with previous work, it is, however, clear that LFP activity is mostly decreased during all of sleep compared to wake, even in the 7- to 10-Hz range that has been associated with sleep transitions (fig. S6). Sleep can be acutely induced in Drosophila by using optogenetic or thermogenetic activation of sleep-promoting neurons . We were curious whether induced sleep revealed similar effects across the fly brain, following the same statistical approaches used above for spontaneous sleep. For this, we focused on whole-brain recordings taken from 104y-Gal4/UAS-TrpA1 flies, a sleep-promoting line (fig. S7A) that expresses a temperature-sensitive cation channel in the fan-shaped body in the central brain and other regions of the brain . As shown in a previous study and other Drosophila sleep studies , activating these neurons with Transient receptor potential A1 (TrpA1) (by increasing the temperature to ~29°C) results in behavioral quiescence and induced sleep, whereas control strains remain awake and active. In these recordings, a different multichannel probe was used (fig. S7B), with 16 recording sites that spanned the entire brain from eye to eye . We preprocessed the induced sleep LFP data (see the "Thermogenetic sleep induction" section) in a similar fashion to our spontaneous sleep LFP data. We first contrasted the mean power spectra per fly under two conditions: baseline and sleep induction (fig. S7C). As above, we then performed a cluster permutation test (flies × frequencies × channels) on the difference between baseline wakefulness and induced sleep to reveal a significant cluster (frequency × channel pair). Thus, we uncovered a significant cluster (fig.
S7D) in the central brain channels across all (5 to 40 Hz) frequency bands, whereas the 104y-Gal4/+ control flies did not reveal such a cluster (fig. S7, E and F). Note that sleep induction using this strain yielded an opposite effect to what we found during spontaneous sleep: LFP activity during induced sleep is on average higher than during baseline wakefulness (fig. S7D), while it was lower during spontaneous sleep . In addition, the effect observed during induced sleep was only observed in the central channels, whereas the spontaneous sleep effects appear to at least cover the entire hemisphere from center to periphery. This shows that genetically induced sleep in flies can produce notably different electrophysiological signatures than spontaneous sleep, consistent with several previous similar observations . For the rest of this current study, we focus on spontaneous sleep. Our earlier analysis of microbehaviors during sleep in this preparation suggests that sleep is not a single phenomenon and that the requisite 5-min immobility criterion might not fully capture potential LFP and behavioral changes that could occur across a sleep bout. There is evidence that sleep quality (via arousal threshold probing) in wild-type Drosophila flies also changes across a bout of quiescence , suggesting that flies transition from lighter to deeper sleep stages. To assess whether this might also be evident in our multichannel recordings, we divided our LFP data (for all channels) into five different temporal segments, analyzing only sleep epochs that were 5 min or longer : (i) “presleep”: the 2 min (−2 to 0 min) before flies stopped moving; (ii) “earlysleep”: the first 2 min (0 to 2 min) after the start of a sleep bout; (iii) “latesleep”: the last 2 min of sleep before mobility resumed; (iv) midsleep: any time between earlysleep and latesleep; and (v) awake: the rest of our LFP data. Our partitioning of the LFP data matches a similar partitioning applied to whole-brain calcium imaging of flies engaged in spontaneous sleep . To examine how LFP-based signatures change within a sleep bout, we decided to perform a hypothesis-agnostic analysis through machine learning techniques. To perform this machine learning–based classification, we first used SVM-based techniques. Briefly, SVM belongs to a class of supervised learning model, which is composed of building a hyperplane or set of hyperplanes in a high-dimensional space (using the kernel trick for nonlinear mapping functions) with the goal to maximize the separation distance between the closest data point (in the training dataset) of any class (functional margin) . The choice of the optimal hyperplane is made in such a way that the generalization error would be lower for the new data points in the test dataset (fig. S8A). For detailed steps for preprocessing of data and implementation of classifiers, refer to the “Sleep staging by classifiers” section. The probabilistic prediction per class per iteration is shown in . It is interesting to note several points. First, the probability of awake data is ~0.7, and that of midsleep is ~0.0, indicating that the classifier performs well on classes that it has already been trained on. Second, at the epoch −2 to −1 min, when the fly is still moving (yellow circles), LFP data indicate that it is closer to resembling sleep (<0.5), before dropping fast to ~0.3 (turquoise circles) in the first 2 min of sleep. 
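To make the classification setup concrete, here is a rough scikit-learn sketch of an SVM with probability outputs trained on awake versus midsleep epochs. The feature layout (flattened channel × frequency spectra), the scaling, the RBF kernel, and the toy data are our assumptions rather than the settings actually used in the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X_awake, X_midsleep: per-epoch feature matrices of shape (n_epochs, n_channels * n_freqs),
# e.g., the flattened output of the spectral step sketched earlier (hypothetical names).
rng = np.random.default_rng(1)
X_awake = rng.standard_normal((200, 15 * 36)) + 0.5    # toy stand-ins for real spectra
X_midsleep = rng.standard_normal((180, 15 * 36)) - 0.5

X = np.vstack([X_awake, X_midsleep])
y = np.array([1] * len(X_awake) + [0] * len(X_midsleep))   # 1 = awake, 0 = midsleep

# RBF-kernel SVM with Platt-scaled probability estimates.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)

# Probability of "awake" for epochs the classifier has never seen (e.g., presleep).
X_presleep = rng.standard_normal((40, 15 * 36))            # toy stand-in
p_awake = clf.predict_proba(X_presleep)[:, list(clf.classes_).index(1)]
print(p_awake.mean())
```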
The above analysis indicates that with this approach, we could predict the probability that a fly will fall asleep 2 min before the start of the immobility period. Just 1 min before flies fall asleep, the LFP data indicate a brief moment more closely resembling wake (yellow circles), perhaps associated with grooming periods [observed in honeybees, for example ]. The first 2 min of sleep (turquoise circles) reveal a probability metric halfway between midsleep and wake, suggesting either a gradual descent into deeper sleep or a distinct sleep stage. Last, at the epoch from x − 2 to x − 1 min before mobility resumes (brown circles), the probability metric returns to a level similar to that of early sleep. Immediately after mobility resumes, the LFP data are classified as no different from awake, i.e., there is no postsleep ambiguity. Note that only the awake and midsleep data have been seen by the classifier; the rest of the data (−4 to +2 min and x − 2 to x + 2 min) have never been seen by the classifier. In addition, midsleep collapses a wide range of different sleep durations in different flies, so it could still be averaging over different sleep states. Nevertheless, our results suggest that broadly dichotomizing midsleep and wake identifies other sleep (and wake) stages that resemble neither. To confirm this, we next examined whether midsleep episodes of different durations are different from each other. We first plotted the durations of the midsleep episodes . On the basis of a distribution centered around a median, we defined midsleep episodes of <14 min as short midsleep and >14 min as long midsleep. We next used an SVM-based classifier (as before) but trained it to distinguish between short and long midsleeps. We identified the probability estimates of the short midsleep class on both the short and long midsleep categories. If short and long midsleeps are different from each other, then they should show two characteristics, similar to those established by the classifier trained on awake versus midsleep: (i) the awake class displayed probability values of ~0.7 and midsleep of ~0.0, so the values of the trained classes were clearly different both from each other and from 0.5 (chance), indicating that the classifier had identified features able to differentiate between the awake and midsleep classes; and (ii) the awake and midsleep class probability values differed significantly from each other (indicating stability of values) across different classifier train/test iterations. When these two criteria were applied to the classifier trained on short versus long midsleep, it satisfied the latter criterion (significantly different, and thus stable, classifier performance) but not the former: short midsleep values were ~0.4 (close to chance) while long midsleep values were ~0.0. This suggests that the classifier did not find features that clearly differentiate the short and long midsleep classes and, hence, that different midsleep durations display similar LFP qualities across the fly brain. Whether these include intercalated epochs of different quality sleep remains an open question.
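As a small illustration of how the per-minute probability traces described in this section could be assembled from such a classifier, the following sketch averages the predicted probability of the awake class within each minute relative to sleep onset. The function, the dictionary layout, and the example values are hypothetical and assume the fitted classifier from the previous sketch.

```python
import numpy as np

def probability_by_minute(clf, epochs_by_minute, awake_label=1):
    """Mean P(awake) per relative minute.

    epochs_by_minute : dict mapping a relative-minute label (e.g., -2, -1, 0, 1)
                       to an (n_epochs, n_features) array of LFP features.
    """
    col = list(clf.classes_).index(awake_label)
    return {minute: clf.predict_proba(X)[:, col].mean()
            for minute, X in sorted(epochs_by_minute.items())}

# Hypothetical usage with the classifier from the previous sketch:
# trace = probability_by_minute(clf, {-2: X_m2, -1: X_m1, 0: X_0, 1: X_1})
# print(trace)   # e.g., {-2: 0.46, -1: 0.55, 0: 0.31, 1: 0.29}
```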
Model-based spectral analysis across different channels
Having revealed how multichannel LFP data can be used to differentiate across different temporal stages of sleep, we next decided to identify which channels might be important for revealing this. For this purpose, we used a multilevel modeling approach. To reveal how spectral data might change throughout the fly brain across a sleep bout, we calculated the mean spectral power for each of the aforementioned epochs and pooled data from central, middle, and peripheral channels. Because different flies had varying numbers of sleep epochs, we used multilevel models instead of a traditional repeated-measures ANOVA. For details, refer to the "Models for spectral analysis" section. To understand the modulation of the LFP power spectrum by sleep epoch, we defined multiple models: a null model, where the power spectrum depends on the mean per fly; an epoch model, where the power spectrum depends on the LFP epoch type; a channel model, where the power spectrum depends on the LFP channel group (central, middle, or peripheral); and an epoch channel model, where the power spectrum depends on a combination of epoch type and LFP channel group. The "epoch channel" model emerged as the winning model. In the epoch channel model, we found a reliable main effect of both epoch ( P < 0.001) and channel ( P < 0.001) on the power spectrum, and the interaction between epoch and channel also had a reliable effect ( P < 0.001). In summary, the above model-based analysis confirms that the power spectrum of the LFP data varies on the basis of the channel location and the epoch state of the fly. We then proceeded to examine more closely how differences in the sleep LFP might be segregated across the fly brain using post hoc tests (with Tukey adjustment for multiple comparisons) from the epoch channel model. In the central channels, the awake data were significantly different from all sleep categories and, critically, were also different from the presleep data. Note that, behaviorally, the fly is still considered awake in the presleep period (i.e., it is still moving). Thus, the ability to predict sleep at least 2 min before the onset of immobility, which was revealed in our SVM analysis , might be explained by these significant spectral differences only observed in the central channels. In the middle channels, the awake data were also significantly different from all sleep categories but were not different from the presleep data. Further, the presleep period was significantly different from the earlysleep, midsleep, and latesleep periods. In the peripheral channels, the awake data were significantly different from all sleep categories but were again not different from the presleep data. Together, mean power spectral data across different channels were thus able to differentiate between awake, presleep, and the different sleep epochs. However, the post hoc analysis did not differentiate among the sleep epochs (earlysleep, midsleep, and latesleep). Since this is inconsistent with previous findings using single glass electrodes , we questioned whether the pooling of channel × frequency data (three broad brain regions × one overall power spectrum) could be hiding more specific effects that might become evident with the full (15 × 145) dimension of channels × frequencies.
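For readers who want a concrete starting point, the model comparison described in this section could be set up roughly as follows with linear mixed models in statsmodels, treating fly as the grouping factor. The column names, the use of maximum likelihood (rather than REML), and the AIC-based comparison are illustrative assumptions, not the exact specification used here.

```python
import statsmodels.formula.api as smf

# df: long-format pandas DataFrame with one row per fly x epoch x channel-group
# observation, with columns "fly", "epoch", "channel", and "power" (hypothetical layout).
def compare_models(df):
    formulas = {
        "null": "power ~ 1",
        "epoch": "power ~ epoch",
        "channel": "power ~ channel",
        "epoch_channel": "power ~ epoch * channel",
    }
    # Fit each candidate as a mixed model with a random intercept per fly.
    fits = {name: smf.mixedlm(f, df, groups=df["fly"]).fit(reml=False)
            for name, f in formulas.items()}
    # Lower AIC indicates the better-supported model.
    return {name: fit.aic for name, fit in fits.items()}

# Example call (df assumed to exist):
# print(compare_models(df))
```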
LFP features across different temporal stages of sleep
Having established the existence of different temporal stages of sleep using an SVM-based classifier, and having confirmed this with the model-based analysis, we were next interested in the features in the LFP data (which channels at which frequencies are important for distinguishing epochs within a sleep bout) that help us differentiate these stages. For this purpose, we used random forest classifiers. A random forest classifier is a class of supervised learning algorithm that uses an ensemble of multiple decision trees for classification/regression. This can be illustrated with an example . In the first step, subsets of training data (#1 to # n ) are created by drawing a random sample of size N with replacement. This allows the ensemble of decision trees (#1 to # n ) to be decorrelated, and this random sampling process is called bagging (bootstrap aggregation). In the second step, each decision tree (#1 to # n ) picks only a random subsample of features (feature randomness) instead of all features (again allowing the decision trees to be decorrelated). In the final step, all the decision trees make individual class predictions, and the final outcome is resolved by simple majority voting (illustrated here with the goal of classifying awake versus sleep). Thus, bagging and feature randomness allow the random forest to perform better than individual decision trees. We also computed classifier performance metrics (see the "Classifier metrics" section) such as precision, recall, F1 score, and the normalized confusion matrix for evaluation. We further used a permutation importance technique (see the "Multiclass random forest classifier analysis and feature importance" section) to identify the relative importance of features in the performance of the classifiers, thereby identifying physiological features (channels × frequency) that are important for differentiating across categories. We first used the random forest classifiers to determine whether there were any differences between day and night sleep, performing classification across the classes "daysleep" and "nightsleep." We identified LFP features discriminating between daysleep and nightsleep in the peripheral channels across frequency bands (10 to 30 Hz), consistent with a previous study using single-channel LFP . We also computed the normalized confusion matrix , which revealed excellent performance in predicting the daysleep and nightsleep classes. Classifier performance metrics across the daysleep and nightsleep classes shown in also indicate good performance across classes (>0.9). We then performed a multiclass classification of the following classes: awake, presleep, earlysleep, midsleep, and latesleep, and identified important LFP features discriminating across these categories. The most important features fall within a narrow range of channels (1 to 3) and frequencies (5 to 10 Hz). This indicates that the 5- to 10-Hz frequency range within the central channels is the most important in resolving different sleep stages. We also computed the normalized confusion matrix , which revealed excellent performance in predicting the multiple classes (green boxes). This indicates that the classifier features (channels × frequency) are sufficient to distinguish multiple sleep stages (classes) and, furthermore, provide direct evidence of multiple sleep stages with distinct frequency components. Classifier performance metrics across the target classes shown in also indicate good performance across classes for the different sleep segments (>0.9). Last, we also cross-validated the utility of the permutation-based technique in identifying important features across epochs. For this purpose, we created a multiclass random forest classifier with the target classes awake and sleep, and identified the features that are important in this classifier (fig. S9A). The most important features are distributed evenly among all the features (channels × frequency), thus cross-validating our previous clustering results ( , left), wherein we showed that the LFP differences across awake and sleep are distributed across all channels and frequencies.
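A minimal scikit-learn sketch of the random forest and permutation-importance step described in this section is given below, assuming the same flattened channel × frequency features as before. The class labels, tree count, split proportions, and toy data are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

# X: (n_epochs, n_channels * n_freqs) features; y: epoch labels (toy data here).
rng = np.random.default_rng(2)
X = rng.standard_normal((500, 15 * 36))
y = rng.choice(["awake", "presleep", "earlysleep", "midsleep", "latesleep"], size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

print(classification_report(y_te, forest.predict(X_te)))            # precision/recall/F1
print(confusion_matrix(y_te, forest.predict(X_te), normalize="true"))

# Permutation importance: how much does shuffling each channel x frequency
# feature degrade performance on held-out data?
imp = permutation_importance(forest, X_te, y_te, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:10]
print(top)   # indices of the ten most informative channel x frequency features
```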
PE behavior during sleep in multichannel recordings
Earlier, we identified rhythmic PEs during midsleep , which we propose describe a distinct sleep stage in Drosophila . However, it is unclear whether brain activity associated with PEs is sleep-like or PE-specific. This distinction is important, as it would disambiguate a unique brain state (deep sleep) from a specific behavior associated with that state (PEs). To identify PEs in our electrophysiological dataset, we again used DeepLabCut to track different body parts of the fly . We further used multiple classifiers based on the tracking data, followed by manual verification, to identify the PEs. Sample PEs in an example fly, along with a few of the features ( x , y proboscis location, likelihood of location, and distance of proboscis to eye), are shown in . For more details on the proboscis detection steps, refer to the "Proboscis tracking for flies on electrophysiology setup" section. Our classifier accuracy was over 80% for most flies : the ground truth was established by a human observer validating classifier-detected events. In , we plot the mean proboscis-to-eye distance for all the flies averaged across awake and sleep bouts. As described earlier for flies without implanted electrodes, PEs executed during wake and sleep are behaviorally similar and, hence, would be difficult to distinguish from each other using video alone. Similar to our behavioral dataset, PE events usually occur in rhythmic bouts rather than as single events. In , we plot the interproboscis interval, which is the interval between consecutive PE events in a single proboscis bout. Most proboscis events occur within 1.8 s (95th percentile) of each other. As shown before in our behavioral data without implanted electrodes, the interproboscis interval does not vary across awake and sleep periods. Next, in , we probed the number of single (one PE event) and multiple (>1 PE event) bouts across different flies. We found that occurrences of single PE events (across both wake and sleep periods) are significantly lower than multiple PE events [pairwise t test, t (13) = 3.72, P < 0.01]. To further illustrate this point, in we plotted the burst length of a PE bout (the number of extension events within a PE bout) and found that only 33% of the events are single PEs, while the rest are multiple PE events. Overall, our investigation of PEs in this multichannel recording dataset is in agreement with our first (electrode-free) dataset, suggesting that inserting a probe into the fly brain does not alter several measures associated with this microbehavior. Previous work has linked PEs with a deep sleep stage in flies . We therefore next investigated whether the number of PEs varied across a sleep bout in our LFP recording dataset, as suggested in our purely behavioral dataset . We found that more PE events occur after 5 min of a sleep bout, compared to those occurring before the 5th min of sleep [pairwise t test, t (12) = −2.8, P < 0.05], suggesting that PEs indeed predominate during later stages of sleep.
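As an aside on implementation, the PE detection from tracking output described in this section can be sketched as follows. The inputs mimic DeepLabCut-style output (x, y, and likelihood per body part), but the thresholds, the grouping window, and the logic are illustrative assumptions rather than the detector actually used.

```python
import numpy as np

def detect_pe_events(prob_xy, eye_xy, likelihood, fps,
                     lik_min=0.9, dist_thresh=5.0, max_gap_s=1.8):
    """Return PE bouts as (start_frame, end_frame, n_extensions).

    prob_xy, eye_xy : (n_frames, 2) tracked proboscis and eye coordinates (pixels)
    likelihood      : (n_frames,) tracking confidence for the proboscis point
    dist_thresh     : proboscis-to-eye distance (pixels) above which a frame counts
                      as an extension (illustrative value)
    max_gap_s       : extensions closer together than this are grouped into one bout,
                      loosely following the ~1.8-s inter-PE interval reported above
    """
    dist = np.linalg.norm(prob_xy - eye_xy, axis=1)
    extended = (dist > dist_thresh) & (likelihood > lik_min)
    onsets = np.flatnonzero(np.diff(extended.astype(int)) == 1) + 1   # extension starts

    bouts, current = [], [onsets[0]] if len(onsets) else []
    for onset in onsets[1:]:
        if onset - current[-1] <= max_gap_s * fps:
            current.append(onset)            # same bout
        else:
            bouts.append((current[0], current[-1], len(current)))
            current = [onset]                # start a new bout
    if current:
        bouts.append((current[0], current[-1], len(current)))
    return bouts
```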
We also compared PEs immediately after flies had awakened from sleep, which revealed no significant difference [pairwise t test, t (13) = −1.92, P > 0.05] between PE bouts occurring after the 5th min of an awake bout compared to those occurring before the 5th min of an awake bout, confirming that transitions into sleep (rather than transitions back to wake) were associated with increased PE events. We next asked whether the number of PE events changed across a sleep bout in our multichannel recording preparation. To determine whether the PE event count varies across different temporal sleep stages , we used multilevel models. For details, refer to the "Models for PE event counts" section. The time_label model (where the PE event count depends only on the specific temporal sleep stage) emerged as the winning model. Further, we performed post hoc tests using Tukey adjustment (for multiple comparisons) to identify pairs that differed significantly. We found that PE events occur more often in midsleep compared to other sleep stages. Returning to our original observation that most PEs occur after 5 min of sleep, we plotted the distribution of PE events occurring in the midsleep epoch across all flies and found that the 95th percentile of all PE events in midsleep indeed falls after 2.5 min of the midsleep epoch (thus, 4.5 min from sleep onset).
LFP features of a deep sleep stage with PEs
We next questioned whether PEs occurring during sleep and wake had similar neural correlates or whether the sleep-related events were indeed different and thus indicative of a unique sleep-related function. We therefore focused on the multichannel data to identify any differences in the LFP activity associated with PEs during wake and sleep epochs. We first identified the PE periods (refer to the "Identification of proboscis periods" section), extracted the LFP data, and epoched them into 1-s bins. Second, we used spectral analysis to determine whether epochs characterized by PEs differ in frequencies across different channels for wake compared to sleep. For this purpose, we computed the spectral power for every 1-s epoch per channel (see the "Power spectrum analysis" section), using as before a common reference system for re-referencing the LFP data. Third, we used nonparametric resampling tools to identify the precise patterns (frequency × channel pairs) differing in proboscis periods within awake and sleep at the group level.
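One readily available implementation of such a cluster-based permutation test is in MNE-Python, sketched below. The use of MNE, the array shapes, the permutation count, and the toy data are our assumptions; the text does not name a specific package.

```python
import numpy as np
from mne.stats import permutation_cluster_1samp_test

# diff: per-fly difference in mean power, e.g. (PE epochs - non-PE epochs),
# shape (n_flies, n_channels, n_freqs); toy data stand in for real spectra here.
rng = np.random.default_rng(3)
diff = rng.standard_normal((16, 15, 36))
diff[:, 0:5, 27:] += 1.0     # fake effect in central channels at high frequencies

t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(
    diff, n_permutations=1000, tail=0, seed=0)

# Report the cluster-level p-values; clusters with p < 0.05 are the significant ROIs.
print([float(p) for p in cluster_pv])
print(sum(p < 0.05 for p in cluster_pv), "significant cluster(s)")
```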
For this purpose, we first computed the difference in mean spectral data across nonproboscis periods (awake or sleep) and proboscis periods (awake proboscis and sleep proboscis, respectively) for individual flies. We then performed a cluster permutation test (flies × frequencies × channels) on the difference data to reveal significant ROIs or clusters (frequency × channel pairs). In , we show the difference data for awake PE events (awake proboscis–awake period) and the clustering analysis, which reveals a significant cluster in the middle channels (channels 6 to 10) across all frequencies. Further, within the significant cluster, we also performed a post hoc analysis, revealing that spectral activity within the awake proboscis periods is lower than in the awake periods. In , we show the difference data for sleep PE events (sleep proboscis–sleep period), and the clustering analysis reveals a significant cluster in the central channels (1 to 5) across higher frequencies (32 to 40 Hz). Further, within the significant cluster, we also performed a post hoc analysis, revealing that spectral activity within the sleep proboscis periods is higher than in the sleep periods (in contrast to the awake proboscis periods). In , we directly compared the awake and sleep proboscis periods and show the difference data (awake proboscis–sleep proboscis) and the clustering analysis, which reveals a significant cluster in the central and middle channels (channels 1 to 9) across higher frequencies (25 to 40 Hz). Further, within this significant cluster, a post hoc analysis revealed that spectral activity within the sleep proboscis periods is lower than in the awake proboscis periods. This suggests that PEs occurring during sleep are qualitatively different from identical PE events occurring during wake, and that the brain activity state [e.g., quiet or deep sleep ] overrides the neural correlates associated with the same behavior occurring during wake.

In this study, we used a combination of multichannel electrophysiology, behavior, and machine learning to identify and characterize spontaneous sleep in Drosophila flies. We describe distinct features associated with sleep stages in wild-type flies . However, we expect that mutant animals could reveal different brain or microbehavior dynamics, especially if sleep functions are impaired. Our multichannel recording preparation should allow LFP activity of mutant strains to be characterized and compared to wild-type controls to provide an additional level of explanation beyond behavioral activity readouts. We have recently published a detailed protocol for performing multichannel recording experiments, which should make this approach more widely accessible . This approach provides an alternative to optical imaging techniques for assessing whole-brain states. However, understanding and probing the exact spatial and cellular nature of the specific sleep stages identified in this study at higher resolution will require innovations in optical imaging during sleep. For example, closed-loop techniques could be used to image sleep only during specific stages, such as only during midsleep PE bouts. With the advent of new genetically encoded calcium indicators with faster kinetics and higher sensitivity [such as jGCaMP8 ], it should also be possible to record genetically encoded calcium indicators and LFPs together to provide complementary readouts to better understand sleep physiology and functions in this model. Similarly, genetically encoded voltage indicators such as ArcLight should reveal whether our findings generalize to other methods of describing electrical activity in the sleeping fly brain. Sleep is most likely a whole-brain phenomenon, meaning that its presumed varied functions are understood to be of benefit to the entire brain rather than to only specific subcircuits. There is good evidence for this in the Drosophila model, with synaptic physiology, for example, changing during sleep in the optic lobes of flies and brain-wide . Similarly, in mammals, subcortical and cortical brain regions experience sleep-related changes that are thought to be important for maintaining neuronal homeostasis . Accordingly, better understanding sleep in an animal model such as D. melanogaster requires sampling associated changes not only in neural activity across the fly brain but also in specific subcircuits of interest.
Unlike in larger animal models such as rodents, recording from multiple brain regions in behaving (and sleeping) flies has been challenging, so there has been limited capacity to investigate dynamic brain processes during sleep in this otherwise powerful model system. While genetically encoded reporters of neural activity (e.g., calcium indicators such as GCaMP) have been successfully used to describe spontaneous sleep in flies , these are typically still limited to a narrow ROI (e.g., the mushroom bodies or the central complex), and imaging conditions are rarely commensurate with the typical day-night cycles of normal sleep. In this study, we overcame these drawbacks by recording electrical activity from 16 channels across the fly brain, in behaving flies across long-lasting recordings that spanned a typical day and night. Our multichannel recording preparation therefore approximates as closely as possible—in flies—a sleep EEG, which has been the starting point for most discussions on sleep physiology in other animals. The human sleep EEG has defined the sleep stages that are now being investigated in other animals , although this is obviously a neocortical view with potentially little relevance to animals lacking the neural architecture giving rise to sleep signatures such as delta (1 to 4 Hz) during slow-wave sleep or theta (5 to 8 Hz) during REM sleep . Rather than focusing on specific frequency bands such as delta and theta, we conducted an agnostic analysis of our multichannel LFP data using machine learning techniques. These unbiased classifiers identified distinct stages of sleep, in flies that were otherwise entirely quiescent (apart from certain microbehaviors, which we discuss further below). These identified sleep stages align closely with similar changes in brain activity dynamics observed in calcium imaging data in spontaneously sleeping flies . For example, in the calcium imaging data, we showed that even before sleep onset, the number of “active” neurons is already different (lower) than wake; accordingly, in the current electrophysiological data, the classifiers predict sleep onset 2 min before flies stop moving. This also aligns with an older (single channel) electrophysiological sleep study in flies showing that brain LFP activity becomes uncorrelated from behavior 5 min before sleep onset . Together, these findings make a compelling case for dissociative states in the fly brain, which is consistent with the view that these states might also be changing within a sleep bout. Our multichannel recordings also revealed that changes in sleep physiology are likely to encompass the entire fly brain, from the optic lobes to the central complex. This is consistent with a recent study where we found that experimentally induced “quiet” and active sleep engaged different whole-brain transcriptional programs . That the whole insect brain “sleeps” is also consistent with other studies, although this has not been previously demonstrated using a comprehensive multichannel approach. An early study in honeybees showed that visually responsive neurons in the optic lobes become unresponsive during sleep and that these cells become rapidly responsive again when bees are woken up with an air puff. Immunochemical studies investigating synaptic proteins found that these were down-regulated in the optic lobes during sleep and in the whole brain . 
It is understood that the insect optic lobes receive significant feedback from the central brain and from the contralateral lobes , and it has been shown that oscillatory neural activity extends throughout the fly brain , so our finding that the optic lobes also sleep is expected. Recent work using a similar multichannel recording preparation found that isoflurane anesthesia affected feedback from the central brain to the optic lobes , suggesting that this efferent communication is a feature of the waking fly brain. However, sleep in the central fly brain is different from sleep in the periphery. Only central channels were predictive of sleep onset, and only the central channels revealed the 5- to 10-Hz frequency features that we have previously identified in single-channel recordings . Although this hints at a sleep-regulatory role for the central complex, aligning with previous studies , it is important to perform causal experiments involving central complex neurons to clarify the same. Sleep in Drosophila was originally defined by inactivity criteria based on locomotion-based readouts . Subsequent studies using video monitoring and probing arousal thresholds confirmed these simple readouts to be accurate estimates of sleep in flies , but these behavioral studies also showed that flies slept in distinct stages. Only recently has closer video monitoring of fly microbehaviors revealed that these animals are not entirely immobile during sleep , although some microbehaviors were already anecdotally observed in the first reports of fly sleep, such as changes in posture . Other insects, such as honeybees, display characteristic microbehaviors during sleep, such as changes in posture and antennal movements . In our study, we also found evidence of altered antennal movements during fly sleep, alongside the previously reported PEs . These microbehaviors are not necessarily correlated, although they do seem to be increased during mid-sleep epochs. PEs have been associated with a deep sleep function (waste clearance) in a previous study , so their occurrence in rhythmic spells during mid-sleep is consistent with that interpretation. PEs during wake and sleep are electrophysiologically different, although they are behaviorally identical. We found that the neural signatures of PEs occurring during wake are concentrated in the middle channels and spread across a broad frequency range (5 to 40 Hz). Note that these middle channels could coincide with the location of neuropils of the antennal mechanosensory and motor center. Several studies have implicated the antennal mechanosensory and motor center as the location of axons of gustatory projection neurons and, thus, an immediate higher-order processing center for taste. Another study has also shown that persistent depolarization of motor command activity of the Fdg (feeding) neurons could also result in PEs. In this context, note that LFP activity during PE events in the awake periods is higher than those in the awake periods without PE events, suggesting a distinct PE signature. However, this is not the case for the exact same behaviors during sleep. We found that LFP activity for PEs occurring during sleep bouts is concentrated instead in the central channels and primarily engages the higher frequencies (32 to 40 Hz). This suggests a distinct control mechanism for PEs occurring during sleep versus wake. There are obviously several drawbacks to studying sleep physiology in a tethered animal that has been skewered by a recording electrode. 
Sleep cannot be quite normal in such a preparation. For example, it is possible that the damage caused by the electrode evokes an increased need for repair and consequently waste clearance , thus increased PE behavior. However, this would also be the case for windows in the brain created for calcium imaging (and in the latter scenario, the proboscis is typically glued in place to prevent brain motion artifacts), so no fly brain recording preparation (yet) can realistically sidestep these concerns. Nevertheless, it is evident that even in this somewhat contrived context, flies do still sleep and their behavior is comparable to tethered preparations without electrodes inserted. In addition, by restricting our analyses to only 8 hours of recording per fly, we ensured that all of our sleep data were generated from healthy animals that were still active when awake. We did not conduct arousal threshold experiments in this study to determine sleep quality, as these experiments inevitably alter sleep architecture, and our main goal was to examine spontaneous sleep in this preparation. Future studies using this paradigm will show how brain responsiveness to stimuli changes across sleep and wake. In our study, we contrasted our spontaneous sleep analyses with an induced sleep dataset, using a 104y-Gal4/UAS-TrpA1 line [collected as a part of the study ]. While it is understood that this sleep-promoting line expresses broadly, beyond just the fan-shaped body alone , it nevertheless still renders flies quiescent and achieves sleep functions . Still, sleep-like effects associated with activating this circuit should be interpreted with caution, as it is clear from our data here that although the central brain is activated during this state, it could be the result of neurons other than in the fan-shaped body being targeted as well . So, whether the observed changes in LFPs in these flies are related to the behavioral sleep effects remains unclear. We do not interpret the increased LFP activity as a seizure-like effect as in but rather as a form of active sleep also seen during optogenetic or thermogenetic activation of other sleep-promoting lines . One important observation from our multichannel study, however, is that we never saw a level of increased LFP activity during spontaneous sleep as observed during the artificial conditions imposed by 104y-Gal4/UAS-TrpA1 activation. Our multichannel data add to the growing realization that the entire insect brain engages in dynamical patterns of activity during both sleep and wake and does not simply shut off when insects become immobile or quiescent. To understand these patterns of activity and how they might relate to conserved sleep functions requires agnostic approaches derived from (for example) machine learning, as done in this study, rather than approximations inspired from human EEG. Animals Flies ( D. melanogaster ) were reared on a standard fly medium under a 12-hour light/dark cycle (lights on at 8 a.m.). Flies were raised on a 25°C incubator (Tritech Research Inc.) with 50 to 60% humidity, and fewer than five flies were maintained per vial to ensure optimal nutrition and growth. Adult female flies (<3 days after eclosion) of wild-type Canton-S were used for the electrophysiological recordings. The choice of age of flies was based on pilot data that suggested a higher survival rate of younger flies over a 12-hour period on the air supported ball setup (after electrode insertion). 
Flies used for the behavioral dataset were between 3 and 7 days after eclosion. For thermogenetic experiments, refer to for further details. No ethics committee approval was needed for all the studies. Fly tethering First, flies were anesthetized on a thermoelectric cooled block maintained at a temperature of 1° to 2°C. Second, the thorax, dorsal surface, and wings of the fly were glued to a tungsten rod using dental cement (Coltene Whaledent SYNERGY D6 Flow A3.5/B3) and cured using high-intensity blue light (Radii Plus, Henry Schein Dental) for about 30 to 40 s. Further, dental cement was also applied to the necks to stabilize them and prevent lateral movement of the head during electrode insertion (see next section). Third, to prepare the fly for the multichannel overnight recording, we placed a sharpened fine wire made of platinum into the thorax (0.25 mm; A-M Systems). The platinum rod serves as a reference electrode and helps filter the noise originating from nonbrain sources. The insertion of a platinum electrode (while providing minimal discomfort to movement of animal) was done using a custom holder with a micromanipulator to enable targeted depth of insertion. For flies in the behavioral dataset, the procedure was the same, except that no reference wire was inserted. Multichannel preparation First, the tethered fly from the previous step was placed on an air supported ball (polystyrene) that served as a platform for walking/resting. Humidified air was delivered to the fly using a tube below the ball (also from the side) to prevent desiccation. Second, to record from half of the regions in the fly brain (half-brain probe) we used a 16-electrode linear silicon probe (model no. A1x16-3 mm 25-177, NeuroNexus Technologies). Third, the probe was inserted into the eye of the fly laterally using a micromanipulator (Merzhauser, Wetzlar, Germany). The probe was inserted such that the electrode sites faced the posterior side of the brain. The final electrode position (depth of insertion) was determined using the polarity reversal procedure described below. For flies recorded in the behavioral dataset, the setup was similar, except that a custom chamber was lowered over the ball and fly to maintain a humidified environment during recordings. Polarity reversal Variability in spatial location of recording sites across different flies is a primary impediment when comparing data across different flies. This occurs mainly because of the angle and depth of insertion of the probe, both of which cannot be precisely controlled. To overcome this issue and to obtain comparable recording sites across flies, we designed a paradigm using visual evoked potentials (fig. S2). First, while the probe was being inserted from the periphery to the center of the brain, we used visual stimuli (square wave of 3 s in duration with 1-Hz frequency) from a blue light-emitting diode (LED). When the visual stimuli were displayed, we simultaneously recorded the LFPs from the 16 electrode sites. During the initial stage of insertion, most of the electrodes are outside of the brain, and only a few are inside the eye, optic lobe. The recordings in the electrodes inside the eye and the brain show a visual evoked potential corresponding to the leading edge and the trailing edge of the square wave. Second, we move the probe slowly toward the center of the brain so more of the electrode sites would now be inside the brain. 
Third, we notice that some electrodes have a negative deflection and some have a positive deflection with respect to the leading edge of the square wave. The electrodes in the eye and optic lobe regions display a positive deflection, and electrodes further toward the central parts of the brain display a negative deflection. However, this polarity change usually happens in the electrodes that are coincident with the regions right after the medulla. Fourth, for all flies, we made sure that the polarity change coincided with the electrodes 11 to 13 to establish consistency in terms of the spatial locations. Dye-based localization To identify the possible locations in the brain targeted by the electrodes, we used a three-step procedure. In the first stage, we used immunohistochemistry to identify the locations of electrodes using a fluorescent dye and neuropils using antibodies against nc82 (presynaptic marker bruchpilot), respectively. In the second stage, we used a registration procedure to map the dye locations to an electron microscopy dataset (using nc82 images). In the third stage, we used principal components analysis to identify the precise neuropils targeted. Immunohistochemistry First, we labeled the probe with Texas red fluorescent dye conjugated to 10,000-Da molecular mass dextran dissolved in distilled water (Invitrogen) to identify the recording locations. Second, after removing the flies from the tether, the brains were dissected in 1× ice-cold phosphate-buffered saline (PBS) and fixed in 4% paraformaldehyde diluted in PBS-T (1× PBS and 0.2% Triton X-100) for 20 min in the dark to preserve the fluorescence of the dye. Third, after fixation, tissues were washed three times with PBS-T [with 0.01% sodium azide (Sigma-Aldrich)] and blocked for 1 hour in 10% goat serum (Sigma-Aldrich). Fourth, the brains were then incubated overnight in a primary antibody solution (mouse anti-nc82, Developmental Studies Hybridoma Bank; 1:20). Fifth, on the next day, brains were washed three times with PBS-T (10 min per wash) and incubated overnight in a secondary antibody solution (1:250; goat anti-mouse Alexa Fluor 647). Last, the brain was washed in PBS-T and embedded in VECTASHIELD and imaged using a confocal microscope (Zeiss). Image registration First, for each fly, we used the nc82 image as source space to align to the JFRC2 template space [which is a spatially calibrated version of JFRC from FlyLight]. The registration process involved two steps: (i) rigid affine registration that roughly aligned the source image to the template space with 12 degrees of freedom (translation, rotation, and scaling); and (ii) nonrigid registration that allowed different brain regions to move independently with a smoothness penalty. The entire process was carried out using the CMTK plugin (Fiji toolbox) as described here . Second, we then used the JFRC2 (light-level) registration as bridging registration to FAFB14 (electron microscopy dataset) using the natverse toolbox and mapped both the nc82 images and the dye locations to the FAFB14 space. Electrode localization The electrode dye locations inside the brain are usually visible as fragments (points) instead of a single continuous (line) segment, mainly because the insertion of the probe causes the smearing of the dye on the neuropils in the brain. To identify the precise locations of the recording electrodes in the brain, we first used these points and performed principal component analysis to find the eigenvector or line (first principal component) that minimizes the distance between the different points and the line itself. This line can be thought of as the main path of the probe as it entered the brain. Next, we chose the innermost electrode as the projection of the innermost point (dye location) onto the eigenvector. The rest of the recording electrode sites were obtained by sampling the same eigenvector at intervals of 25 μm (which is the interelectrode distance on the probe) from the innermost point.
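As a concrete illustration of this line-fitting and projection step, a minimal numpy sketch is given below; the dye-fragment coordinates and the choice of which projected point counts as the innermost electrode are placeholders rather than values from the study.

import numpy as np

# Hypothetical dye-fragment coordinates (n_points x 3, micrometers) recovered
# from the registered stack; real values come from the FAFB14-mapped dye signal.
points = np.array([[310.0, 155.2,  98.1],
                   [335.4, 158.9, 101.7],
                   [362.1, 163.0, 104.9],
                   [390.8, 168.2, 108.3]])

# First principal component = line minimizing the distance from the points to it.
centroid = points.mean(axis=0)
_, _, vt = np.linalg.svd(points - centroid)
direction = vt[0]                                   # unit vector along the probe track

# Project each fragment onto the fitted line.
scores = (points - centroid) @ direction
projections = centroid + scores[:, None] * direction

# Take the projection of the innermost fragment as the innermost electrode and
# step outward in 25-um increments (the probe's interelectrode spacing).
# Which end counts as "innermost" depends on probe orientation; argmax is illustrative.
innermost = projections[np.argmax(scores)]
spacing_um, n_sites = 25.0, 16
electrode_positions = np.array([innermost - k * spacing_um * direction
                                for k in range(n_sites)])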
LFP recording The LFP data from the 16-electrode probe were acquired using a Tucker-Davis Technologies (USA) multichannel data acquisition system at 25 kHz, coupled with an RZ5 BioAmp processor and an RP2.1 enhanced real-time processor. Data were acquired and amplified using a preamplifier (RA16PA/RA4PA Medusa PreAmp). The preamplifier used can only record for up to 20 hours on a single charge cycle; hence, we limited the LFP recordings to a 20-hour duration. Further, as file sizes tend to be larger over longer recording periods, we recorded data in chunks of 1 hour, which was automatically controlled via a MATLAB script. Video recording for flies on electrophysiology setup The ball setup was illuminated with visible light, switched ON at 8 a.m. and switched OFF at 8 p.m. (mimicking the light/dark cycle conditions in the incubator). Further, we used infrared LEDs for monitoring the movement of the fly on the ball (which allowed us to quantify movements under both the light and the dark cycles). We recorded the fly in profile view with a digital camera from Scopetek (DCM 130E), and to achieve optical magnification, we used a zoom lens (from Navitar). As done previously , we removed the infrared filter in front of the camera sensor to allow for filming under infrared light, thereby achieving constant illumination under both day and night. We made a custom script with Python (2.7.15) and OpenCV (3.4.2.17) that allowed for recording videos automatically and saving them in hourly intervals. The video was recorded with a resolution of 640 × 480 pixels at 30 frames/s using the Xvid codec, together with additional metadata (time stamps in a csv file) that allowed the LFP data to be matched up with the video data later. Video recording for flies on behavioral dataset setup The camera in this setup was a Point Grey/Teledyne FLIR Firefly perpendicular to the fly, in addition to an extra camera (Pro-MicroScan) placed on the trinocular output of a Nikon SZ7 stereomicroscope. This second camera was used to record a close-up view of the head of the fly for the purposes of tracking movements of the antennae. Illumination was as above with infrared LEDs, and recordings were obtained with the same Python scripts. Movement analysis The fly movement was quantified from the video files using Python (3.6.1) and OpenCV (3.4.9) in the following manner. First, every video file (1 per hour of recording) was read frame by frame. Second, for each frame, we clipped the image such that the main focus was on the fly while ignoring items in the background. Third, we converted the color space for each frame from BGR to grayscale. Fourth, we computed the "deltaframe" as the absolute difference of the current frame with the previous frame. Fifth, we thresholded the deltaframe using a custom-defined threshold per fly and converted it into a binary image. Sixth, we dilated the thresholded image, identified contours in the dilated image, and looped over the different contours, selecting those above a specific area threshold. Last, we drew rectangles around the contours above the threshold on the original (color) image to manually verify the movement location. Only those frames that had contours above threshold were regarded as "moved" frames, and all other frames were classified as "still." Thus, each frame would be either 0 (still) or 1 (moved).
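A minimal, self-contained sketch of this frame-differencing pipeline is shown below; the file path, region of interest, and thresholds are hypothetical placeholders standing in for the per-fly values that were tuned manually in the study.

import cv2
import numpy as np

def frames_moved(video_path, roi, diff_threshold=25, min_area=50):
    # Returns one 0/1 movement flag per frame via simple frame differencing.
    # roi = (x, y, w, h) crop around the fly; threshold values here are illustrative.
    cap = cv2.VideoCapture(video_path)
    x, y, w, h = roi
    prev_gray, flags = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        crop = frame[y:y + h, x:x + w]                   # clip to the fly
        gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)    # BGR -> grayscale
        if prev_gray is None:
            moved = 0                                    # first frame has no reference
        else:
            delta = cv2.absdiff(gray, prev_gray)         # "deltaframe"
            _, binary = cv2.threshold(delta, diff_threshold, 255, cv2.THRESH_BINARY)
            dilated = cv2.dilate(binary, None, iterations=2)
            found = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            contours = found[0] if len(found) == 2 else found[1]   # OpenCV 4.x vs 3.x
            moved = int(any(cv2.contourArea(c) >= min_area for c in contours))
        prev_gray = gray
        flags.append(moved)
    cap.release()
    return np.array(flags)

# Example: flags = frames_moved("fly01_hour01.avi", roi=(100, 80, 320, 240))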
In the next stage, we used the frame-by-frame movement data to identify segments of LFP data as sleep or awake in the following fashion. First, we synced the LFP data with the video data using the time stamps in both the LFP data and video metadata (csv files). Second, we clipped both the LFP and video data to the first 8 hours of recording. Though 23 flies survived for more than 12 hours, we only used the first 8 hours to ensure that the fly's health was completely optimal (considering the circumstances) in both the behavior and brain recordings. Further, only 16 flies were used for the analysis, as 7 of them had issues with calibration (noisy or no calibration) or abnormal activity (either no sleep trials or very active). Third, we pruned the movement data to remove brief, spurious movement artifacts. Fourth, we labeled segments in which the fly was immobile for more than 5 min as sleep, the 2 min immediately preceding each sleep segment as presleep, and the rest of the data as awake. Crepuscular analysis To identify whether the fly activity in our recordings followed a crepuscular pattern, first, we computed the movement pattern as the proportion of frames moved per minute. Second, we divided the movement patterns across six different periods: (i) predawn: 5 to 7 a.m., (ii) dawn: 7 to 9 a.m., (iii) postdawn: 9 to 11 a.m., (iv) predusk: 5 to 7 p.m., (v) dusk: 7 to 9 p.m., and (vi) postdusk: 9 to 11 p.m. Third, we computed the z score of the movement pattern for normalization purposes, thus ending up with a normalized movement pattern per minute for each of the abovementioned time periods per fly. LFP analysis Preprocessing LFP data were analyzed with custom-made scripts in MATLAB (MathWorks) using the EEGLAB toolbox . The preprocessing steps were as follows: First, the binary data were extracted for every hour from the Tucker-Davis Technologies "tank" file format to the MATLAB "mat" file format. Second, the data were resampled to 250 Hz and bandpass-filtered with zero phase shift between 0.5 and 40 Hz using a Hamming windowed-sinc FIR filter, and further line noise at 50 Hz was removed using a notch filter. Third, the hourly LFP data were saved to the EEGLAB ".set" file format. Fourth, the hourly LFP data were interpolated in a linear way to avoid any discontinuities between the hourly segments of data. Fifth, the movement data (see the "Movement analysis" section) were added to the EEGLAB file along with the start and end time based on video data. Sixth, the multihour LFP data (along with the movement data) were collated for the first 8 hours of the recording. Seventh, we created separate epochs from the movement data for sleep, presleep, and awake [where the 2 min preceding immobility (−2 to 0 min) is presleep, immobility is sleep, and the rest of the data is awake; here, 0 min is the start of the immobility].
Eighth, the epochs were now re-referenced on the basis of the channel where the polarity reversal occurred. For this, we identified the channel wherein the polarity reversal occurred (see the “Polarity reversal” section) and subtracted all the channels from this channel, thus resulting in 15 channels after the re-referencing. This brain-based referencing technique (similar to the Cz-based reference in human EEG recordings) allows for filtering of nonbrain-based physiological noise components (such as heartbeat, etc.). Previous multichannel recordings used only the thorax-based referencing (followed by bipolar referencing) along with independent component analysis to remove physiological noises. However, the identification of noise components such as heartbeat, etc. from independent component analysis is subjective and further depends on the expertise of the human curator. Our technique overcomes these issues while simultaneously providing a method to remove physiological noises not originating from the brain. Power spectrum analysis (sleep versus wake) The power spectra of the LFP data were computed for each fly in the following fashion. First, each condition (“wake” and sleep) of varying duration was reepoched into trials of 60-s duration. Second, each trial was bandpass-filtered with zero phase shift between 5 and 40 Hz using hamming windowed-sinc FIR filter. Third, for each trial, power spectra (in decibels) were computed using the “spectopo” function in the EEGLAB toolbox in MATLAB. Fourth, the mean power spectra for all the trials per condition per fly were computed. The goal of the power spectra analysis was to identify the cluster of frequency bands and channels that differ across the sleep and wake periods at the group level. To perform these group level comparisons (sleep versus wake periods), we only used flies that had at least 10 trials under each condition. To identify the differences across wake and sleep periods, we used cluster-based permutation tests. Cluster-based permutation tests are a nonparametric way of testing difference across conditions in an N -dimensional space (here, frequencies × channels) while still allowing for the multiple comparison problems to be solved without reducing the statistical power of the test. The outcome of such a test would be significant cluster(s), which, in our case, would be an ROI across frequencies × channels. Thus, we performed a cluster permutation test (flies × frequencies × channels) using MNE (0.22.0) in Python (permutation_cluster_1samp_test) with all possible permutations to identify clusters (ROIs in frequencies × channel space) that differ across awake and sleep periods. We also computed the effect sizes for every channel × frequency combination using Cohen’s d measure (difference of means/SD). Thermogenetic sleep induction The thermogenetic sleep induction data were collected using 104y-Gal4/UAS-TrpA1 lines as part of the study . This multichannel recording consisted of a 16-electrode full-brain probe (model no. A1x16-3 mm50-177, NeuroNexus Technologies) covering the whole of the brain (fig. S7B) (in contrast to the half-brain probe mentioned before) with an interelectrode distance of 50 μm. The rest of the recording parameters were the same as mentioned in the previous section. Sleep induction was achieved by transient activation of this circuit, as described in . Preprocessing LFP data were analyzed with custom-made scripts in MATLAB (MathWorks) using EEGLAB as mentioned before. 
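As an illustration of the sleep-versus-wake spectral comparison described above, a minimal Python sketch is given below; it uses Welch's method (scipy) in place of EEGLAB's spectopo, together with MNE's cluster permutation routine, and the per-fly trial containers and permutation count are placeholder assumptions rather than the study's exact settings.

import numpy as np
from scipy.signal import welch
from mne.stats import permutation_cluster_1samp_test

FS = 250  # Hz, sampling rate of the downsampled LFP

def trial_power_db(trial):
    # trial: array of shape (n_channels, n_samples) for one 60-s epoch
    freqs, psd = welch(trial, fs=FS, nperseg=2 * FS, axis=-1)
    keep = (freqs >= 5) & (freqs <= 40)
    return freqs[keep], 10.0 * np.log10(psd[:, keep])

def fly_mean_spectrum(trials):
    # mean power spectrum (channels x frequencies) over all trials of one condition
    return np.mean([trial_power_db(t)[1] for t in trials], axis=0)

# sleep_trials and wake_trials are hypothetical dicts: fly_id -> list of trials.
diffs = np.stack([fly_mean_spectrum(sleep_trials[f]) - fly_mean_spectrum(wake_trials[f])
                  for f in sorted(sleep_trials)])        # flies x channels x freqs
diffs = np.transpose(diffs, (0, 2, 1))                   # flies x freqs x channels

# One-sample cluster permutation test of sleep-minus-wake against zero
# (the study used all possible permutations; 1024 is a placeholder).
t_obs, clusters, cluster_pvals, _ = permutation_cluster_1samp_test(
    diffs, n_permutations=1024, tail=0)
significant_clusters = [c for c, p in zip(clusters, cluster_pvals) if p < 0.05]

# Effect size (Cohen's d) for every frequency x channel bin.
cohens_d = diffs.mean(axis=0) / diffs.std(axis=0, ddof=1)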
The preprocessing steps were as follows: First, the LFP data per condition (“baseline,” “sleep induction,” and “recovery”) were converted to EEGLAB .set file format with a sampling rate of 1 kHz. Second, the LFP data were re-referenced using a differential approach, wherein nearby channels are subtracted with each other resulting in 15 channels. Power spectrum analysis (baseline versus sleep induction) The power spectra of the LFP data were computed for each fly in the following fashion. First, each condition (baseline and sleep induction) was reepoched into trials of 1-s duration. Second, each trial was bandpass-filtered with zero phase shift between 5 and 40 Hz using hamming windowed-sinc FIR filter. Third, for each trial, power spectra (in decibels) were computed using the spectopo function in the EEGLAB toolbox in MATLAB. Fourth, the mean power spectra for all the trials per condition per fly were computed. The group level comparison was performed using cluster permutation test methods (as described in previous sections) to identify differences in frequency × channels across baseline and sleep induction conditions. Sleep staging by classifiers The main goal of this analysis was to use classifiers to identify the existence of sleep stages using LFP data. Labeling of sleep states Here, we relabeled the segments of data (already identified as sleep and awake based on movement data) in the following fashion. First, we labeled the segments of data in the first 2 min (0 to 2 min) after the start of immobility as earlysleep and the segments of the data in the preceding 2 min (−2 to 0 min) as presleep. Second, we labeled the segments of data in the last 2 min of sleep as latesleep and the segments of data in between the earlysleep and latesleep as midsleep. The rest of the data are considered as awake. Preprocessing and power spectrum computation The preprocessing steps were the same as mentioned in the previous section (LFP preprocessing). For the computation of the power spectrum, we followed similar procedures as mentioned before; however, we saved the individual power spectrum per trial (channels × frequency) per fly in a csv file along with the corresponding label of the sleep state. Classifier probability analysis We implemented an SVM-based classifier using scikit-learn (0.24.2) to classify the LFP data using the following steps. First, we collated the features based on power spectrum (channels × frequency) from all the flies across different sleep states. Second, we filtered the features to only awake (5106 epochs) and midsleep (1165 epochs) states. Here, we also did not feed (for training) the preceding 2 min of presleep, succeeding 2 min of earlysleep, and the last 2 min of sleep latesleep into the classifier (we used those minutes for sanity check purposes; refer to ). Third, we encoded the target labels (awake and midsleep) into binary states using “LabelEncoder” from scikit-learn. Fourth, we balanced the composition of labels (or classes) to prevent bias due to unequal distribution of classes in the training dataset. Fifth, we divided the dataset into train and test sets (80% train and 20% test) using “train_test_split” from scikit-learn in a stratified fashion. Sixth, we subjected both the train and test data to a standard scaler using “StandardScaler” from scikit-learn, which removes the mean of the data and scales it by the variance. 
Seventh, we implemented an SVM-based classifier using a "linear" kernel along with probability estimates per class and fit the classifier to the train dataset. Eighth, we used the trained classifier on the test dataset and computed different metrics of classifier performance such as accuracy, roc_auc, recall, precision, and F1 score using "metrics" from scikit-learn (fig. S8B). Ninth, we used the trained classifier on all class labels (awake, presleep, earlysleep, midsleep, latesleep, preceding 2 min of presleep, and succeeding 2 min of latesleep) from the original dataset and computed the probability estimates per class. Note that none of the presleep, earlysleep, latesleep, preceding 2 min of presleep, or succeeding 2 min of latesleep data had been seen by the classifier beforehand. The above process from step 5 onward is repeated a further four times with different test and train splits to create five different iterations of classifiers and performance metrics. Multiclass random forest classifier analysis and feature importance To identify differences across multiple classes (awake, presleep, earlysleep, midsleep, and latesleep), we implemented a random forest classifier using scikit-learn (0.24.2) to classify the LFP data using the following steps. First, we collated the features based on power spectrum (channels × frequency) from all the flies across different sleep states. Second, as the different labels (or classes) were unbalanced [awake (5585 epochs), presleep (258 epochs), earlysleep (262 epochs), midsleep (1165 epochs), and latesleep (262 epochs)], we used SMOTE (synthetic minority oversampling technique) from imblearn (0.8.1) to balance the distribution of classes in the dataset. Third, we divided the dataset into train and test sets (80% train and 20% test) using train_test_split from scikit-learn in a stratified fashion. Fourth, we subjected both the train and test data to a standard scaler using StandardScaler from scikit-learn, as mentioned in the previous section. Fifth, we encoded the target labels into binary states using "LabelBinarizer" from scikit-learn. Sixth, we implemented a random forest classifier for this multiclass classification problem. As the random forest classifier has multiple hyperparameters that need to be tuned, we first used a random grid (using "RandomizedSearchCV" from scikit-learn) to search for the hyperparameters and then further used these parameters in a grid search model (using "GridSearchCV" from scikit-learn) to identify the best hyperparameters. Seventh, we used the trained classifier on the test dataset and computed different metrics of classifier performance such as recall, precision, F1 score, etc. using metrics from scikit-learn separately for all the five classes. Furthermore, we also computed a normalized confusion matrix using "confusion_matrix" from scikit-learn. The above process from step 5 onward is repeated a further four times with different test and train splits to create five different iterations of classifiers and performance metrics. Last, to identify and rank the importance of different features, we used the permutation importance metric (using "permutation_importance" from scikit-learn). The permutation feature importance works by randomly shuffling a single feature value and further identifying the decrease in the model score . The process breaks the relationship between the shuffled feature and the target; thus, if the feature is very important, this is indicated by a large drop in the model score; on the other hand, if it is relatively unimportant, then the model score is not affected as much. We used the permutation importance with a repeat of 5, and for each train/test split, we computed a permutation importance score. Last, the mean permutation importance score was computed using all the splits. The procedure for differentiating across daysleep and nightsleep periods was the same except that the target classification was across daysleep (917 epochs) and nightsleep (770 epochs) classes.
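A minimal sketch of this multiclass pipeline is given below; the feature and label files are hypothetical placeholders, and the fixed hyperparameters stand in for the RandomizedSearchCV/GridSearchCV tuning described above.

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# X: epochs x features (flattened channels x frequency bins); y: stage labels.
# Both files are hypothetical placeholders for the collated power-spectrum features.
X = np.load("power_features.npy")
y = np.load("stage_labels.npy", allow_pickle=True)   # e.g., "awake", "presleep", ...

# Balance the classes with SMOTE, then split and scale, mirroring the order above.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.2,
                                          stratify=y_bal, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Fixed hyperparameters here are illustrative only.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr, y_tr)

y_pred = clf.predict(X_te)
print(classification_report(y_te, y_pred))
print(confusion_matrix(y_te, y_pred, normalize="true"))

# Permutation importance: shuffle one feature at a time on the held-out set and
# record the drop in model score (n_repeats=5, as in the study).
imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
ranked_features = np.argsort(imp.importances_mean)[::-1]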
Classifier metrics The performance of the abovementioned classifiers (both SVM-based and random forest–based) was evaluated using metrics such as accuracy, recall, precision, roc_auc, and F1 score. The definitions of these metrics are as follows: 1) Recall: This refers to the ability of a classifier to correctly detect the true class of the epoch among the classifications made. It is obtained as TP/(TP + FN) and is also known as sensitivity. TP indicates true positives, and FN indicates false negatives. 2) Precision: This refers to the exactness of the classifier. It is obtained as TP/(TP + FP), where FP indicates false positives. 3) F1 score: This refers to the harmonic mean between precision and recall. 4) roc_auc: This refers to the area under the receiver operating characteristic curve. In general, it refers to how efficient the classifier is in identifying different epochs. Scores closer to 1 indicate a highly efficient classifier, whereas those closer to 0 indicate otherwise. 5) Accuracy: This is defined as the number of correctly classified epochs divided by the overall number of epochs classified. 6) Confusion matrix: This enables visualization of the classifier performance by tabulating the predicted classes against the actual classes. For multiclass problems (random forest classifiers here), the values on the diagonal indicate where the predicted and actual classes converge, whereas those on the off-diagonal indicate misclassifications. Proboscis tracking for flies on electrophysiology setup Pose detection We used DeepLabCut to track the different body parts of the fly using an artificial neural network trained in the following fashion. First, we extracted frames from sample videos wherein the fly performs the following: normal walking movement on the ball ("all_body") and PE periods ("proboscis"), both while asleep and awake. For each fly, we extracted videos of the abovementioned categories for the purpose of creating annotation labels. Second, we extracted frames from these videos and further labeled the different body parts: eye, proboscis, leg1_tip, leg1_joint, leg3_tip, leg3_joint, and abdomen . Third, we trained the neural network per fly using this dataset with "resnet_50" weights until the loss parameter during training stabilized. The performance of the network per fly (train and test error in pixels) was in general similar in both the train and test datasets. Fourth, we evaluated the annotation performance manually by labeling a test video and verifying the output. Last, this trained network (per fly) was used for annotating the video for the first 9 hours of the recording. Pose analysis In the next step, we use the pose detection output to design a classifier capable of identifying PE periods.
First, we manually detected several sample time points (to be used as ground truth for training/testing the classifier) in the video of each fly, identified proboscis time periods, and saved them in a “csv” file. Second, we used the pose tracking data ( x , y likelihood) for the body parts of the proboscis, leg1_tip, leg1_joint, eye, and abdomen and further computed low pass–filtered data (0.1-Hz butterworth filter) of each body part. Further, we also computed the moving average (window length of five samples) of the filtered data. Third, we computed “dist_eyeprob” as the Euclidean distance between the proboscis and eye body part and lastly multiplied the same with the likelihood of the proboscis body part. Fourth, we used the abovementioned body parts (and its derivatives) as features and used the StandardScaler from scikit-learn for normalizing the data. Fifth, we divided the dataset into train and test sets (70% train and 30% test) using train_test_split from scikit-learn. Sixth, we implemented an SVM-based classifier using an “rbf” kernel and fit the classifier to the train dataset. Seventh, we used the trained classifier on the test dataset and computed different metrics of classifier performance such as accuracy, recall, precision, etc. using metrics from scikit-learn. The data segments (frames) identified here will be used to construct the candidate proboscis periods, which then will be further refined in the next steps. Proboscis detection First, we use the frames identified by the classifier from the previous section and construct continuous segments to identify time periods of probable proboscis periods. Further, we add additional time periods using the likelihood of the proboscis part with a threshold-based method. Second, we identify the peak frame (where the maximum displacement of the proboscis occurs) in each PE event (each proboscis bout consists of multiple PE events) and save the identified proboscis events (frame number, time, and behavior state) to a csv file. Third, each event in the csv file is manually verified, and only true events are further taken forward. This process is repeated for all the flies, and the proboscis detection accuracy per fly is plotted in . Microbehavior tracking for flies on behavioral dataset setup Here, the same method for tracking microbehaviors via DeepLabCut was used, focusing on the proboscis and abdomen for the lateral camera view (see above) and the base and tip of the left and right antennae for the dorsal view of the fly head. The data from these two streams were imported into a custom MATLAB (2020a) script, which performed synchronization based on the integrated time stamps. After preprocessing, antennal tracking with DeepLabCut was converted into an angle for both respective antennae by calculation of the respective positions of the bases and tips, with the angle of the fly’s head with respect to the camera automatically derived from these data and used to correct the angle of the antennae. For the proboscis, a median position was calculated for each recording—assumed to be the resting position—and the distance and angle between the proboscis at any given time point, and this median position was calculated. Extensions of the proboscis were derived from these distance data with the “findpeaks” function in MATLAB, with a number of exclusion criteria applied to remove tracking artifacts. 
For example, detected peaks that exceeded a biologically plausible distance threshold, lasted only for a single frame, or had an implausible instantaneous rise time were excluded. Since this method could potentially be biased toward identifying proboscis activity that follows a prototypical shape, we also used an alternative proboscis event detection based purely on the current distance of the proboscis from resting. In this, we used a manually set threshold for each fly to detect portions in the recording when the proboscis was extended versus not, and for these “events,” we calculated the duration and median angle of the proboscis during the span of the event. Periods of antennal periodicity in recordings were calculated on the basis of a fast Fourier transform and applied to time segments of recordings. Since proboscis activity was not sinusoidal in nature (and thus would behave poorly if subjected to a fast Fourier transform), periodicity for this organ was calculated manually as a factor of timing between individual PEs in that PEs were periodic if they occurred less than 6 s after a preceding PE. This value was selected from observation of typical inter-PE intervals. LFP analysis—Proboscis The main goal of this analysis was to identify the spectral signatures associated with the PE periods across awake and sleep states in the LFP data. Identification of proboscis periods First, we used the csv file containing frame by frame detection of manually verified proboscis events (from the section above). Second, we identify periods of PEs which are close together (within 10 s of each other) and label them as continuous periods. Third, we add activity labels such as awake (awake periods without any proboscis activity), “awakeprob” (awake periods with proboscis activity), sleep (sleep periods without any proboscis activity), “sleepprob” (sleep periods with proboscis activity), presleep (presleep periods without any proboscis activity), and “presleepprob” (presleep periods with proboscis activity) based on annotated behaviors. Fourth, we extract the LFP data corresponding to the different time periods across each fly. Power spectrum analysis The preprocessing steps for the extracted LFP data were the same as mentioned in the previous section (LFP preprocessing). For the computation of the power spectrum, we followed similar procedures as mentioned before; however, we computed the individual power spectrum per trial (channels × frequency) per fly by reepoching them into trials of 1 s in duration (instead of the 60-s periods for sleep analysis, as the proboscis periods are usually shorter). Then, the mean power spectrum for all the trials per condition per fly was computed. Next, we performed cluster permutation tests (flies × frequencies × channels) for identifying the differences across frequencies and channels across different conditions. For this analysis we only used flies that had at least 50 trials under each condition. Multilevel models Models for antennal and proboscis periodicity We defined two different multilevel models (tables S1, S3, and S5, left and right antenna and proboscis) to understand how the likelihood of periodicity varies by sleep epoch. In the null model, the periodicity depends only on the mean per fly (fixed effect) and the fly ID (random effect). In the second model (epoch model), the periodicity depends only on the epoch (fixed effect) and the fly ID (random effect). 
These models were fit using the “lmer” function (“lmerTest” package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Last the winning model was analyzed using the “anova” function (tables S2, S4, and S6, left and right antenna and proboscis) in R . Models for movement pattern across crepuscular periods We defined two different multilevel models separately for dawn and dusk periods (tables S7 and S9, movement pattern in dawn and dusk periods) to understand how the movement pattern of the flies varies by different twilight hours. In the null model, the movement depends only on the mean per fly (fixed effect) and the fly ID (random effect). In the second model (crepuscular-type model), the movement depends only on the crepuscular-type (fixed effect) and the fly ID (random effect). These models were fit using the lmer function (lmerTest package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Last, the winning model was analyzed using the anova function (tables S8 and S10, movement pattern in dawn and dusk periods) in R . Models for movement pattern across recorded hours We defined two different multilevel models (table S11, movement pattern across recorded hours) to understand how the movement of the flies (thereby health) varies by different recording hours. In the null model, the movement depends only on the mean per fly (fixed effect) and the fly ID (random effect). In the second model (recorded hour model), the movement depends only on the recorded hour (fixed effect) and the fly ID (random effect). These models were fit using the lmer function (lmerTest package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Last, the winning model was analyzed using the anova function (table S12, movement pattern across recorded hours) in R . Models for LFP power spectrum across recorded hours We defined two different multilevel models separately for awake and sleep periods (tables S13 and S15, LFP power spectrum in awake and sleep periods across recorded hours) to understand how the different recording hours (thereby consistency of recordings) affected the LFP power spectrum. In the null model, the LFP power spectrum depends only on the mean per fly (fixed effect) and the fly ID (random effect). In the second model (recorded hour model), the LFP power spectrum depends only on the recorded hour (fixed effect) and the fly ID (random effect). These models were fit using the lmer function (lmerTest package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Last, the winning model was analyzed using the anova function (table S14, LFP power spectrum in awake periods across recorded hours; for the sleep periods, the winning model was the null model, so was not analyzed further) in R . Models for spectral analysis We defined four different multilevel models (table S16) to understand the modulation of the power spectrum by sleep epoch and channel type. In the null model, the power spectrum depends only on the mean per fly (fixed effect) and the fly ID (random effect). 
In the second model (epoch model), the power spectrum depends only on the LFP epoch type (fixed effect) and the fly ID (random effect). In the third model (channel model), the power spectrum depends only on the channel type (fixed effect) and the fly ID (random effect). In the fourth model (epoch channel model), the power spectrum depends on a combination of the LFP epoch type and the channel type, both used as fixed effects and the fly ID (random effect). These four models were fit using the lmer function (lmerTest package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Last, the top two winning models were compared against each other using anova function in R , to validate whether the winning model (if it is more complex) is actually better than the losing model (if it is simpler). The epoch channel model emerged as the winning model, indicating an important contribution from different channels. The epoch channel model was further analyzed with the anova function (table S17) in R . Models for PE event counts We defined two different multilevel models (table S18) to understand the modulation of PE event count by sleep epochs. In the null model, the PE event count depends only on the mean per fly (fixed effect) and the fly ID (random effect). In the second model (time_label model), the PE event count depends only on the specific temporal sleep stage (fixed effect) and the fly ID (random effect). These two models were fit using the lmer function (lmerTest package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Thus, the time_label model emerged as the winning model. The time_label model was further analyzed with the anova function (table S19) in R .
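For readers working in Python rather than R, the same null-versus-predictor comparison can be sketched with statsmodels mixed models; this is an equivalent illustration, not the lmer/lmerTest workflow used in the study, and the input file and column names are hypothetical.

import pandas as pd
from scipy.stats import chi2
import statsmodels.formula.api as smf

# One row per fly x epoch with a periodicity measurement (hypothetical file/columns).
df = pd.read_csv("antennal_periodicity.csv")

# Null model: intercept only; epoch model: sleep epoch as a fixed effect.
# Both include a random intercept per fly (groups), fit by maximum likelihood.
null_fit = smf.mixedlm("periodicity ~ 1", df, groups=df["fly_id"]).fit(reml=False)
epoch_fit = smf.mixedlm("periodicity ~ epoch", df, groups=df["fly_id"]).fit(reml=False)

# Likelihood ratio chi-square test between the nested models.
lr_stat = 2 * (epoch_fit.llf - null_fit.llf)
df_diff = epoch_fit.model.exog.shape[1] - null_fit.model.exog.shape[1]
p_value = chi2.sf(lr_stat, df_diff)
print(f"chi2({df_diff}) = {lr_stat:.2f}, p = {p_value:.4g}")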
The insertion of a platinum electrode (while providing minimal discomfort to movement of animal) was done using a custom holder with a micromanipulator to enable targeted depth of insertion. For flies in the behavioral dataset, the procedure was the same, except that no reference wire was inserted. First, the tethered fly from the previous step was placed on an air supported ball (polystyrene) that served as a platform for walking/resting. Humidified air was delivered to the fly using a tube below the ball (also from the side) to prevent desiccation. Second, to record from half of the regions in the fly brain (half-brain probe) we used a 16-electrode linear silicon probe (model no. A1x16-3 mm 25-177, NeuroNexus Technologies). Third, the probe was inserted into the eye of the fly laterally using a micromanipulator (Merzhauser, Wetzlar, Germany). The probe was inserted such that the electrode sites faced the posterior side of the brain. The final electrode position (depth of insertion) was determined using the polarity reversal procedure described below. For flies recorded in the behavioral dataset, the setup was similar, except that a custom chamber was lowered over the ball and fly to maintain a humidified environment during recordings. Variability in spatial location of recording sites across different flies is a primary impediment when comparing data across different flies. This occurs mainly because of the angle and depth of insertion of the probe, both of which cannot be precisely controlled. To overcome this issue and to obtain comparable recording sites across flies, we designed a paradigm using visual evoked potentials (fig. S2). First, while the probe was being inserted from the periphery to the center of the brain, we used visual stimuli (square wave of 3 s in duration with 1-Hz frequency) from a blue light-emitting diode (LED). When the visual stimuli were displayed, we simultaneously recorded the LFPs from the 16 electrode sites. During the initial stage of insertion, most of the electrodes are outside of the brain, and only a few are inside the eye, optic lobe. The recordings in the electrodes inside the eye and the brain show a visual evoked potential corresponding to the leading edge and the trailing edge of the square wave. Second, we move the probe slowly toward the center of the brain so more of the electrode sites would now be inside the brain. Third, we notice that some electrodes have a negative deflection and some have a positive deflection with respect to the leading edge of the square wave. The electrodes in the eye, optic lobe regions, display a positive deflection, and electrodes further to the central parts of the brain display a negative deflection. However, this polarity change usually happens in the electrodes that are coincident on the regions right after the medulla. Fourth, for all flies, we made sure that the polarity change coincided with the electrodes 11 to 13 to establish consistency in terms of the spatial locations. To identify the possible locations in the brain targeted by the electrodes, we used a three-step procedure. In the first stage, we used immunohistochemistry to identify the locations of electrodes using a fluorescent dye and neuropils using antibodies against nc82 (presynaptic marker bruchpilot), respectively. In the second stage, we used a registration procedure to map the dye locations to an electron microscopy dataset (using nc82 images). 
In the third stage, we used principal components analysis to identify the precise neuropils targeted. Immunohistochemistry First, we labeled the probe with Texas red fluorescent dye conjugated to 10,000-Da molecular mass dextran dissolved in distilled water (Invitrogen) to identify the recording locations. Second, after removing the flies from the tether, the brains were dissected in 1× ice-cold phosphate-buffered saline (PBS) and fixed in 4% paraformaldehyde diluted in PBS-T (1× PBS and 0.2% Triton X-100) for 20 min in the dark to preserve the fluorescence of the dye. Third, after fixation, tissues were washed three times with PBS-T [with 0.01% sodium azide (Sigma-Aldrich)] and blocked for 1 hour in 10% goat serum (Sigma-Aldrich). Fourth, the brains were then incubated overnight in a primary antibody solution (mouse anti-nc82, Developmental Studies Hybridoma Bank; 1:20). Fifth, on the next day, brains were washed three times with PBS-T (10 min per wash) and incubated overnight in a secondary anti-body solution (1:250; goat anti-mouse Alexa Fluor 647). Last, the brain was washed in PBS-T and embedded in VECTASHIELD and imaged using a confocal microscope (Zeiss). Image registration First, for each fly, we used the nc82 image as source space to align to the JFRC2 template space [which is a spatially calibrated version of JFRC from FlyLight]. The registration process involved two steps: (i) rigid affine registration that roughly aligned the source image to the template space with 12 degrees of freedom (translation, rotation, and scaling); and (ii) nonrigid registration that allowed different brain regions to move independently with a smoothness penalty. The entire process was carried out using the CMTK plugin (FiJi toolbox) as described here . Second, we then used the JFRC2 (light-level) registration as bridging registration to FAFB14 (electron microscopy dataset) using the natverse toolbox and mapped both the nc82 images and the dye locations to the FAFB14 space. Electrode localization The electrode dye locations inside the brain are usually visible as fragments (points) instead of a single continuous (line) segment, mainly because the insertion of the probe causes the smearing of the dye on the neuropils in the brain. To identify the precise locations of the recording electrodes in the brain, we first used the points and performed principal component analysis to find the eigenvector or line (first principal component) that would have minimize the distance between the different points to the line itself. This line could be thought of as the main path of the probe as it entered into the brain. Next, we choose the innermost electrode as the projection of the innermost point (dye location) projected onto the eigenvector. The rest of the recording electrode sites were obtained by sampling the same eigenvector at intervals of 25 μm (which is the interelectrode distance on the probe) from the innermost point. First, we labeled the probe with Texas red fluorescent dye conjugated to 10,000-Da molecular mass dextran dissolved in distilled water (Invitrogen) to identify the recording locations. Second, after removing the flies from the tether, the brains were dissected in 1× ice-cold phosphate-buffered saline (PBS) and fixed in 4% paraformaldehyde diluted in PBS-T (1× PBS and 0.2% Triton X-100) for 20 min in the dark to preserve the fluorescence of the dye. 
Third, after fixation, tissues were washed three times with PBS-T [with 0.01% sodium azide (Sigma-Aldrich)] and blocked for 1 hour in 10% goat serum (Sigma-Aldrich). Fourth, the brains were then incubated overnight in a primary antibody solution (mouse anti-nc82, Developmental Studies Hybridoma Bank; 1:20). Fifth, on the next day, brains were washed three times with PBS-T (10 min per wash) and incubated overnight in a secondary anti-body solution (1:250; goat anti-mouse Alexa Fluor 647). Last, the brain was washed in PBS-T and embedded in VECTASHIELD and imaged using a confocal microscope (Zeiss). First, for each fly, we used the nc82 image as source space to align to the JFRC2 template space [which is a spatially calibrated version of JFRC from FlyLight]. The registration process involved two steps: (i) rigid affine registration that roughly aligned the source image to the template space with 12 degrees of freedom (translation, rotation, and scaling); and (ii) nonrigid registration that allowed different brain regions to move independently with a smoothness penalty. The entire process was carried out using the CMTK plugin (FiJi toolbox) as described here . Second, we then used the JFRC2 (light-level) registration as bridging registration to FAFB14 (electron microscopy dataset) using the natverse toolbox and mapped both the nc82 images and the dye locations to the FAFB14 space. The electrode dye locations inside the brain are usually visible as fragments (points) instead of a single continuous (line) segment, mainly because the insertion of the probe causes the smearing of the dye on the neuropils in the brain. To identify the precise locations of the recording electrodes in the brain, we first used the points and performed principal component analysis to find the eigenvector or line (first principal component) that would have minimize the distance between the different points to the line itself. This line could be thought of as the main path of the probe as it entered into the brain. Next, we choose the innermost electrode as the projection of the innermost point (dye location) projected onto the eigenvector. The rest of the recording electrode sites were obtained by sampling the same eigenvector at intervals of 25 μm (which is the interelectrode distance on the probe) from the innermost point. The LFP data from the 16-electrode probe were acquired using Tucker-Davis Technologies (Tucker-Davis Technologies, USA) multichannel data acquisition system at 25 kHz coupled with a RZ5 Bioamp processor and RP2.1 enhanced real-time processor. Data were acquired and amplified using a preamplifier (RA16PA/RA4PA Medusa PreAmp). The preamplifier used can only record data of up to 20 hours on a single charge cycle; hence, we limited the recording of the LFP signals to 20-hour duration. Further, as file sizes tend to be larger over longer recording periods, we recorded data in chunks of 1 hour, which was automatically controlled via a MATLAB script. The ball setup was illuminated with visible light, switched ON at 8 a.m. and switched OFF at 8 p.m. (mimicking the light/dark cycle conditions in the incubator). Further, we used infrared LEDs for monitoring the movement of the fly on the ball (which allowed us to quantify movements under both the light and the dark cycles. We recorded the fly in profile view with a digital camera from Scopetek (DCM 130E), and to achieve optical magnification, we used a zoom lens (from Navitar). 
As done previously , we removed the infrared filter in front of the camera sensor, to allow for filming under infrared light, thereby achieving constant illumination under both day and night. We made a custom script with Python (2.7.15) and OpenCV (3.4.2.17) that allowed for recording videos automatically and saving them in hourly intervals. The video was recorded with a resolution of 640 × 480 pixels at 30 frames/s using Xvid codec and further with additional metadata (time stamps in a csv file) that allowed a later matching up of the LFP data with the video data. The camera in this setup was a Point Grey/Teledyne FLIR Firefly perpendicular to the fly, in addition to an extra camera (Pro-MicroScan) placed on the trinocular output of a Nikon SZ7 stereomicroscope. This second camera was used to record a close-up view of the head of the fly for the purposes of tracking movements of the antennae. Illumination was as above with infrared LEDs, and recordings were obtained with the same Python scripts. The fly movement was quantified with the video files using Python (3.6.1) and OpenCV (3.4.9) in the following manner. First, every video file (1 per hour of recording) was read frame by frame. Second, for each frame, we clipped the image such that the main focus was on the fly while ignoring items in the background. Third, we converted the color space for each frame from BGR to grayscale. Fourth, we computed the “deltaframe” as the absolute difference of the current frame with the previous frame. Fifth, we thresholded the deltaframe using a custom defined threshold per fly and converted them into binary. Sixth, we dilated the thresholded image and identified contours in the dilated image and looped over the different contours selecting those above a specific threshold (area). Last, we drew rectangles around the contours above the threshold on the original (color) image to manually verify the movement location. Only those frames that had contours above threshold were regarded as “moved” frames, and other frames would be classified as “still.” Thus, each frame would be either 0 (still) or 1 (moved). In the next stage, we used the frame by frame movement data to identify segments of LFP data as sleep or awake in the following fashion. First, we synced the LFP data with the video data using the time stamps in both the LFP data and video metadata (csv files). Second, we clipped both the LFP and video data to the first 8 hours of recording. Though 23 flies survived for more than 12 hours, we only used the first 8 hours to ensure that the fly’s health was completely optimal (considering the circumstances) in both the behavior and brain recordings. Further, only 16 flies were used for the analysis, as 7 of them had issues with calibration (noisy or no calibration) or abnormal activity (either no sleep trials or very active). Third, we pruned movement data to ensure that brief noise in movements is avoided. Fourth, we identified the segments of data wherein the fly was immobile for more than 5 min as sleep and the segment immediately preceding 2 min before the sleep data as presleep and the rest of the data as awake. To identify whether the fly activity in our recordings followed a crepuscular pattern, first, we computed the movement pattern as proportion of frames moved per minute within these periods. 
Second, we divided the movement patterns across six different periods: (i) predawn: 5 to 7 a.m., (ii) dawn: 7 to 9 a.m., (iii) postdawn: 9 to 11 a.m., (iv) predusk: 5 to 7 p.m., (v) dusk: 7 to 9 p.m., and (vi) postdusk: 9 to 11 p.m. Third, we computed the z score of the movement pattern for normalization purposes, thus ending up with movement pattern per minute of the above mentioned time periods per fly. Preprocessing LFP data were analyzed with custom-made scripts in MATLAB (MathWorks) using EEGLAB toolbox . The preprocessing steps were as follows: First, the binary data were extracted for every hour from Tucker-Davis Technologies “tank” file format to MATLAB “mat” file format. Second, the data were resampled to 250 Hz and bandpass-filtered with zero phase shift between 0.5 and 40 Hz using hamming windowed-sinc FIR filter, and further line noise at 50 Hz was removed using a notch filter. Third, the hourly LFP data were saved to EEGLAB “.set” file format. Fourth, the hourly LFP data were interpolated in a linear way to avoid any discontinuities between the hourly segments of data. Fifth, the movement data (see the “Movement analysis” section) were added to the EEGLAB file along with the start and end time based on video data. Sixth, the multihour LFP data (along with the movement data) were collated for the first 8 hours of the recording. Seventh, we created separate epochs based on movement data into sleep, presleep, and awake [where preceding 2 min of immobility (−2 to 0 min) is presleep, immobility is sleep, and the rest of the data is awake, here, 0 min is the start of the immobility]. Eighth, the epochs were now re-referenced on the basis of the channel where the polarity reversal occurred. For this, we identified the channel wherein the polarity reversal occurred (see the “Polarity reversal” section) and subtracted all the channels from this channel, thus resulting in 15 channels after the re-referencing. This brain-based referencing technique (similar to the Cz-based reference in human EEG recordings) allows for filtering of nonbrain-based physiological noise components (such as heartbeat, etc.). Previous multichannel recordings used only the thorax-based referencing (followed by bipolar referencing) along with independent component analysis to remove physiological noises. However, the identification of noise components such as heartbeat, etc. from independent component analysis is subjective and further depends on the expertise of the human curator. Our technique overcomes these issues while simultaneously providing a method to remove physiological noises not originating from the brain. Power spectrum analysis (sleep versus wake) The power spectra of the LFP data were computed for each fly in the following fashion. First, each condition (“wake” and sleep) of varying duration was reepoched into trials of 60-s duration. Second, each trial was bandpass-filtered with zero phase shift between 5 and 40 Hz using hamming windowed-sinc FIR filter. Third, for each trial, power spectra (in decibels) were computed using the “spectopo” function in the EEGLAB toolbox in MATLAB. Fourth, the mean power spectra for all the trials per condition per fly were computed. The goal of the power spectra analysis was to identify the cluster of frequency bands and channels that differ across the sleep and wake periods at the group level. To perform these group level comparisons (sleep versus wake periods), we only used flies that had at least 10 trials under each condition. 
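For the brain-based re-referencing step described above (subtracting the channel at which the polarity reversal occurs from every other channel), a minimal numpy sketch might look like this; the channel index and data are illustrative.

```python
import numpy as np

def rereference_to_polarity_reversal(lfp: np.ndarray, ref_idx: int) -> np.ndarray:
    """Subtract the polarity-reversal channel from all other channels.

    lfp: (n_channels, n_samples) array; returns (n_channels - 1, n_samples).
    """
    ref = lfp[ref_idx]
    keep = [i for i in range(lfp.shape[0]) if i != ref_idx]
    return lfp[keep] - ref  # 16 channels in -> 15 re-referenced channels out

lfp = np.random.randn(16, 250 * 60)  # 1 min of simulated data at 250 Hz
rereferenced = rereference_to_polarity_reversal(lfp, ref_idx=7)  # ref_idx is per fly
print(rereferenced.shape)  # (15, 15000)
```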
To identify the differences across wake and sleep periods, we used cluster-based permutation tests. Cluster-based permutation tests are a nonparametric way of testing difference across conditions in an N -dimensional space (here, frequencies × channels) while still allowing for the multiple comparison problems to be solved without reducing the statistical power of the test. The outcome of such a test would be significant cluster(s), which, in our case, would be an ROI across frequencies × channels. Thus, we performed a cluster permutation test (flies × frequencies × channels) using MNE (0.22.0) in Python (permutation_cluster_1samp_test) with all possible permutations to identify clusters (ROIs in frequencies × channel space) that differ across awake and sleep periods. We also computed the effect sizes for every channel × frequency combination using Cohen's d measure (difference of means/SD).
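A hedged sketch of this group-level test: it assumes the per-fly mean spectra are already arranged as (flies × frequencies × channels) arrays and uses a fixed number of permutations for brevity, whereas the study used all possible permutations.

```python
import numpy as np
from mne.stats import permutation_cluster_1samp_test

# Hypothetical per-fly mean power spectra, shape (n_flies, n_freqs, n_channels).
psd_wake = np.random.randn(16, 36, 15)
psd_sleep = np.random.randn(16, 36, 15)

diff = psd_sleep - psd_wake  # paired difference per fly, tested against zero
t_obs, clusters, cluster_pvals, _ = permutation_cluster_1samp_test(
    diff, n_permutations=5000, tail=0, out_type="mask")

# Effect size (Cohen's d) for every frequency x channel bin.
cohens_d = diff.mean(axis=0) / diff.std(axis=0, ddof=1)

for mask, p in zip(clusters, cluster_pvals):
    if p < 0.05:
        print("significant cluster spanning", int(mask.sum()), "freq x channel bins")
```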
The thermogenetic sleep induction data were collected using 104y-Gal4/UAS-TrpA1 lines as part of the study . This multichannel recording consisted of a 16-electrode full-brain probe (model no. A1x16-3 mm50-177, NeuroNexus Technologies) covering the whole of the brain (fig. S7B) (in contrast to the half-brain probe mentioned before) with an interelectrode distance of 50 μm. The rest of the recording parameters were the same as mentioned in the previous section. Sleep induction was achieved by transient activation of this circuit, as described in . Preprocessing LFP data were analyzed with custom-made scripts in MATLAB (MathWorks) using EEGLAB as mentioned before. The preprocessing steps were as follows: First, the LFP data per condition ("baseline," "sleep induction," and "recovery") were converted to EEGLAB .set file format with a sampling rate of 1 kHz. Second, the LFP data were re-referenced using a differential approach, wherein nearby channels are subtracted with each other resulting in 15 channels. Power spectrum analysis (baseline versus sleep induction) The power spectra of the LFP data were computed for each fly in the following fashion. First, each condition (baseline and sleep induction) was reepoched into trials of 1-s duration. Second, each trial was bandpass-filtered with zero phase shift between 5 and 40 Hz using hamming windowed-sinc FIR filter. Third, for each trial, power spectra (in decibels) were computed using the spectopo function in the EEGLAB toolbox in MATLAB. Fourth, the mean power spectra for all the trials per condition per fly were computed. The group level comparison was performed using cluster permutation test methods (as described in previous sections) to identify differences in frequency × channels across baseline and sleep induction conditions.
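The per-trial spectra here were computed with EEGLAB's spectopo in MATLAB; as a rough Python stand-in, a Welch estimate in decibels over the 5- to 40-Hz band could be computed as below (the sampling rate, epoch length, and channel count follow the description above, but the data are synthetic).

```python
import numpy as np
from scipy.signal import welch

FS = 1000  # sampling rate of the sleep-induction recordings (Hz)

def trial_psd_db(trial: np.ndarray, fs: int = FS):
    """trial: (n_channels, n_samples) 1-s epoch -> (freqs, PSD in dB per channel)."""
    freqs, psd = welch(trial, fs=fs, nperseg=min(fs, trial.shape[-1]), axis=-1)
    return freqs, 10.0 * np.log10(psd)

trial = np.random.randn(15, FS)              # one 1-s epoch, 15 re-referenced channels
freqs, psd_db = trial_psd_db(trial)
band = (freqs >= 5) & (freqs <= 40)          # frequency band analyzed in the study
mean_spectrum = psd_db[:, band].mean(axis=0)  # mean across channels within the band
```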
The main goal of this analysis was to use classifiers to identify the existence of sleep stages using LFP data. Labeling of sleep states Here, we relabeled the segments of data (already identified as sleep and awake based on movement data) in the following fashion. First, we labeled the segments of data in the first 2 min (0 to 2 min) after the start of immobility as earlysleep and the segments of the data in the preceding 2 min (−2 to 0 min) as presleep. Second, we labeled the segments of data in the last 2 min of sleep as latesleep and the segments of data in between the earlysleep and latesleep as midsleep. The rest of the data are considered as awake. Preprocessing and power spectrum computation The preprocessing steps were the same as mentioned in the previous section (LFP preprocessing). For the computation of the power spectrum, we followed similar procedures as mentioned before; however, we saved the individual power spectrum per trial (channels × frequency) per fly in a csv file along with the corresponding label of the sleep state. Classifier probability analysis We implemented an SVM-based classifier using scikit-learn (0.24.2) to classify the LFP data using the following steps. First, we collated the features based on power spectrum (channels × frequency) from all the flies across different sleep states. Second, we filtered the features to only awake (5106 epochs) and midsleep (1165 epochs) states. Here, we also did not feed (for training) the preceding 2 min of presleep, succeeding 2 min of earlysleep, and the last 2 min of sleep latesleep into the classifier (we used those minutes for sanity check purposes; refer to ). Third, we encoded the target labels (awake and midsleep) into binary states using "LabelEncoder" from scikit-learn. Fourth, we balanced the composition of labels (or classes) to prevent bias due to unequal distribution of classes in the training dataset. Fifth, we divided the dataset into train and test sets (80% train and 20% test) using "train_test_split" from scikit-learn in a stratified fashion. Sixth, we subjected both the train and test data to a standard scaler using "StandardScaler" from scikit-learn, which removes the mean of the data and scales it by the variance. Seventh, we implemented an SVM-based classifier using a "linear" kernel along with probability estimates per class and fit the classifier to the train dataset. Eighth, we used the trained classifier on the test dataset and computed different metrics of classifier performance such as accuracy, roc_auc, recall, precision, and F1 score using "metrics" from scikit-learn (fig. S8B).
Ninth, we used the trained classifier on all class labels (awake, presleep, earlysleep, midsleep, latesleep, preceding 2 min of presleep, and succeeding 2 min of latesleep) from the original dataset and computed the probability estimates per class. Note that none of the presleep, earlysleep, latesleep, preceding 2 min of presleep, and succeeding 2 min of latesleep data had been seen by the classifier beforehand. The above process from step 5 onward is repeated a further four times with different test and train splits to create five different iterations of classifiers and performance metrics. Multiclass random forest classifier analysis and feature importance To identify differences across multiple classes (awake, presleep, earlysleep, midsleep, and latesleep), we implemented a random forest classifier using scikit-learn (0.24.2) to classify the LFP data using the following steps. First, we collated the features based on power spectrum (channels × frequency) from all the flies across different sleep states. Second, as the different labels (or classes) were unbalanced [awake (5585 epochs), presleep (258 epochs), earlysleep (262 epochs), midsleep (1165 epochs), and latesleep (262 epochs)], we used SMOTE (synthetic minority oversampling technique) from imblearn (0.8.1) to balance the distribution of classes in the dataset. Third, we divided the dataset into train and test sets (80% train and 20% test) using train_test_split from scikit-learn in a stratified fashion. Fourth, we subjected both the train and test data to a standard scaler using StandardScaler from scikit-learn, as mentioned in the previous section. Fifth, we encoded the target labels into binary states using "LabelBinarizer" from scikit-learn. Sixth, we implemented a random forest classifier for this multiclass classification problem. As the random forest classifier has multiple hyperparameters that need to be tuned, we first used a random grid (using "RandomizedSearchCV" from scikit-learn) to search for the hyperparameters and then further used these parameters in a grid search model (using "GridSearchCV" from scikit-learn) to identify the best hyperparameters. Seventh, we used the trained classifier on the test dataset and computed different metrics of classifier performance such as recall, precision, F1 score, etc. using metrics from scikit-learn separately for all the five classes. Furthermore, we also computed a normalized confusion matrix using "confusion_matrix" from scikit-learn. The above process from step 5 onward is repeated a further four times with different test and train splits to create five different iterations of classifiers and performance metrics. Last, to identify and rank the importance of different features, we used the permutation importance metric (using "permutation_importance" from scikit-learn). The permutation feature importance works by randomly shuffling a single feature value and further identifying the decrease in the model score .
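A condensed sketch of this multiclass stage classifier (SMOTE balancing, a stratified split, scaling, a random forest with a randomized hyperparameter search, and permutation importance); the feature matrix, labels, and the abbreviated parameter grid are synthetic stand-ins, and the study additionally binarized the labels and repeated the split five times.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(800, 15 * 36))  # hypothetical channels x frequencies features
y = rng.choice(["awake", "presleep", "earlysleep", "midsleep", "latesleep"],
               size=800, p=[0.7, 0.05, 0.05, 0.15, 0.05])

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)          # balance the classes
X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, test_size=0.2,
                                          stratify=y_bal, random_state=0)
scaler = StandardScaler().fit(X_tr)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [200, 500], "max_depth": [None, 10, 20]},   # abbreviated grid
    n_iter=4, cv=3, random_state=0)
search.fit(scaler.transform(X_tr), y_tr)
best = search.best_estimator_
print("test accuracy:", best.score(scaler.transform(X_te), y_te))

# Rank features by how much shuffling each one degrades the held-out score.
imp = permutation_importance(best, scaler.transform(X_te), y_te,
                             n_repeats=5, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("top feature indices:", top)
```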
The procedure for differentiating across daysleep and nightsleep periods was the same except the target classification was across daysleep (917 epochs) and nightsleep (770 epochs) classes. Classifier metrics The performance of the abovementioned classifiers (both SVM-based and random forest–based) was evaluated using metrics such as accuracy, recall, precision, roc_auc, and F1 score. The definitions of these metrics are as follows: 1) Recall: This refers to the ability of a classifier to correctly detect the true class of the epoch among the classifications made. It is obtained by TP/(TP + FN). It is also known as sensitivity. TP indicates true positives, and FN indicates false negatives. 2) Precision: This refers to the exactness of the classifier. It is obtained by TP/(TP + FP), where FP indicates false positives. 3) F1 score: This refers to the harmonic mean between precision and recall. 4) roc_auc: This refers to the area under the receiver operating curve. In general, it refers to how efficient the classifier is in identifying different epochs. Scores closer to 1 indicate a highly efficient classifier, whereas those closer to 0 indicate otherwise. 5) Accuracy: This is defined as the number of correctly classified epochs divided by the overall number of epochs classified. 6) Confusion matrix: This enables visualization of the classifier performance, by tabulating the predicted classes against actual classes. For multiclass problems (random forest classifiers here), the values in the diagonal indicate where the predicted and actual classes converge, whereas those on the off-diagonal indicate misclassifications.
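Putting the binary awake-versus-midsleep pipeline and the metrics above together, a minimal scikit-learn sketch could look like the following; the feature matrix and labels are synthetic placeholders for the saved per-trial spectra.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(600, 15 * 36))                 # epochs x (channels*freqs)
labels = np.array(["awake"] * 300 + ["midsleep"] * 300)    # already class-balanced

y = LabelEncoder().fit_transform(labels)
X_train, X_test, y_train, y_test = train_test_split(
    features, y, test_size=0.2, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="linear", probability=True).fit(scaler.transform(X_train), y_train)

X_test_s = scaler.transform(X_test)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test_s)))
print("roc_auc:", roc_auc_score(y_test, clf.predict_proba(X_test_s)[:, 1]))

# Probability estimates for held-out epoch types (e.g., presleep) can then be
# obtained with clf.predict_proba on their scaled feature vectors.
```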
Pose detection We used DeepLabCut to track the different body parts of the fly using an artificial neural network trained in the following fashion. First, we extracted frames from sample videos, wherein the fly performs the following: normal walking movement on the ball ("all_body") and PE periods ("proboscis"), both while asleep and awake. For each fly, we extracted videos of the abovementioned categories for the purpose of creating annotation labels. Second, we extracted frames from these videos and further labeled the different body parts: eye, proboscis, leg1_tip, leg1_joint, leg3_tip, leg3_joint, and abdomen . Third, we trained the neural network per fly using this dataset with "resnet_50" weights until the loss parameter during training stabilizes. The performance of the network per fly (train and test error in pixels) was in general similar in both the train and test datasets. Fourth, we evaluated the annotation performance manually by labeling a test video and verifying the same. Last, this trained network (per fly) was used for annotating the video for the first 9 hours of the recording. Pose analysis In the next step, we use the pose detection output to design a classifier capable of identifying PE periods. First, we manually detected several sample time points (to be used as ground truth for training/testing the classifier) in the video of each fly, identified proboscis time periods, and saved them in a "csv" file.
Second, we used the pose tracking data ( x , y likelihood) for the body parts of the proboscis, leg1_tip, leg1_joint, eye, and abdomen and further computed low pass–filtered data (0.1-Hz butterworth filter) of each body part. Further, we also computed the moving average (window length of five samples) of the filtered data. Third, we computed "dist_eyeprob" as the Euclidean distance between the proboscis and eye body part and lastly multiplied the same with the likelihood of the proboscis body part. Fourth, we used the abovementioned body parts (and its derivatives) as features and used the StandardScaler from scikit-learn for normalizing the data. Fifth, we divided the dataset into train and test sets (70% train and 30% test) using train_test_split from scikit-learn. Sixth, we implemented an SVM-based classifier using an "rbf" kernel and fit the classifier to the train dataset. Seventh, we used the trained classifier on the test dataset and computed different metrics of classifier performance such as accuracy, recall, precision, etc. using metrics from scikit-learn. The data segments (frames) identified here will be used to construct the candidate proboscis periods, which then will be further refined in the next steps. Proboscis detection First, we use the frames identified by the classifier from the previous section and construct continuous segments to identify time periods of probable proboscis periods. Further, we add additional time periods using the likelihood of the proboscis part with a threshold-based method. Second, we identify the peak frame (where the maximum displacement of the proboscis occurs) in each PE event (each proboscis bout consists of multiple PE events) and save the identified proboscis events (frame number, time, and behavior state) to a csv file. Third, each event in the csv file is manually verified, and only true events are further taken forward. This process is repeated for all the flies, and the proboscis detection accuracy per fly is plotted in .
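The feature construction feeding the rbf-kernel SVM above might be sketched as follows; the DeepLabCut column names, the filter order, and the synthetic trace are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from scipy.signal import butter, filtfilt

FPS = 30
b, a = butter(2, 0.1 / (FPS / 2), btype="low")   # 0.1-Hz low-pass (order assumed)

# Synthetic stand-in for the per-frame DeepLabCut output of one fly.
n = 300
rng = np.random.default_rng(2)
tracks = pd.DataFrame({
    "proboscis_x": 200 + rng.normal(0, 2, n),
    "proboscis_y": 150 + rng.normal(0, 2, n),
    "proboscis_likelihood": rng.uniform(0.8, 1.0, n),
    "eye_x": np.full(n, 180.0),
    "eye_y": np.full(n, 140.0),
})

feats = pd.DataFrame(index=tracks.index)
dist = np.hypot(tracks["proboscis_x"] - tracks["eye_x"],
                tracks["proboscis_y"] - tracks["eye_y"])
feats["dist_eyeprob"] = dist * tracks["proboscis_likelihood"]
feats["dist_eyeprob_lp"] = filtfilt(b, a, feats["dist_eyeprob"].to_numpy())
feats["dist_eyeprob_ma"] = feats["dist_eyeprob"].rolling(5, min_periods=1).mean()

# These per-frame features (plus analogous ones for the leg and abdomen points)
# would be scaled with StandardScaler and fed to an rbf-kernel SVC trained on the
# manually labelled ground-truth frames, as described above.
print(feats.head())
```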
Here, the same method for tracking microbehaviors via DeepLabCut was used, focusing on the proboscis and abdomen for the lateral camera view (see above) and the base and tip of the left and right antennae for the dorsal view of the fly head. The data from these two streams were imported into a custom MATLAB (2020a) script, which performed synchronization based on the integrated time stamps. After preprocessing, antennal tracking with DeepLabCut was converted into an angle for both respective antennae by calculation of the respective positions of the bases and tips, with the angle of the fly's head with respect to the camera automatically derived from these data and used to correct the angle of the antennae. For the proboscis, a median position was calculated for each recording (assumed to be the resting position), and the distance and angle between the proboscis position at any given time point and this median position were calculated. Extensions of the proboscis were derived from these distance data with the "findpeaks" function in MATLAB, with a number of exclusion criteria applied to remove tracking artifacts. For example, detected peaks that exceeded a biologically plausible distance threshold, lasted only for a single frame, or had an implausible instantaneous rise time were excluded. Since this method could potentially be biased toward identifying proboscis activity that follows a prototypical shape, we also used an alternative proboscis event detection based purely on the current distance of the proboscis from resting.
In this, we used a manually set threshold for each fly to detect portions in the recording when the proboscis was extended versus not, and for these "events," we calculated the duration and median angle of the proboscis during the span of the event. Periods of antennal periodicity in recordings were calculated on the basis of a fast Fourier transform and applied to time segments of recordings. Since proboscis activity was not sinusoidal in nature (and thus would behave poorly if subjected to a fast Fourier transform), periodicity for this organ was calculated manually as a factor of timing between individual PEs in that PEs were periodic if they occurred less than 6 s after a preceding PE. This value was selected from observation of typical inter-PE intervals. The main goal of this analysis was to identify the spectral signatures associated with the PE periods across awake and sleep states in the LFP data. Identification of proboscis periods First, we used the csv file containing frame by frame detection of manually verified proboscis events (from the section above). Second, we identify periods of PEs which are close together (within 10 s of each other) and label them as continuous periods. Third, we add activity labels such as awake (awake periods without any proboscis activity), "awakeprob" (awake periods with proboscis activity), sleep (sleep periods without any proboscis activity), "sleepprob" (sleep periods with proboscis activity), presleep (presleep periods without any proboscis activity), and "presleepprob" (presleep periods with proboscis activity) based on annotated behaviors. Fourth, we extract the LFP data corresponding to the different time periods across each fly. Power spectrum analysis The preprocessing steps for the extracted LFP data were the same as mentioned in the previous section (LFP preprocessing). For the computation of the power spectrum, we followed similar procedures as mentioned before; however, we computed the individual power spectrum per trial (channels × frequency) per fly by reepoching them into trials of 1 s in duration (instead of the 60-s periods for sleep analysis, as the proboscis periods are usually shorter). Then, the mean power spectrum for all the trials per condition per fly was computed. Next, we performed cluster permutation tests (flies × frequencies × channels) for identifying the differences across frequencies and channels across different conditions. For this analysis we only used flies that had at least 50 trials under each condition.
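A small sketch of the distance-based PE detection and the 6-s periodicity rule described above, using scipy's find_peaks on a synthetic distance-from-resting trace; the threshold and minimum peak separation stand in for the per-fly choices.

```python
import numpy as np
from scipy.signal import find_peaks

FPS = 30
rng = np.random.default_rng(3)
dist = rng.normal(1.0, 0.2, 60 * FPS)            # distance from resting position
for start in (200, 330, 460, 1200):              # inject four synthetic extensions
    dist[start:start + 15] += 8.0

DIST_THRESH = 5.0                                # assumed per-fly threshold
peaks, _ = find_peaks(dist, height=DIST_THRESH, distance=FPS // 2)
pe_times = peaks / FPS                           # PE peak times in seconds

# PEs separated by less than 6 s from the preceding PE are counted as periodic.
intervals = np.diff(pe_times)
periodic = np.concatenate(([False], intervals < 6.0))
print("PE events:", len(pe_times), "| periodic events:", int(periodic.sum()))
```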
Models for antennal and proboscis periodicity We defined two different multilevel models (tables S1, S3, and S5, left and right antenna and proboscis) to understand how the likelihood of periodicity varies by sleep epoch. In the null model, the periodicity depends only on the mean per fly (fixed effect) and the fly ID (random effect). In the second model (epoch model), the periodicity depends only on the epoch (fixed effect) and the fly ID (random effect). These models were fit using the "lmer" function ("lmerTest" package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Last, the winning model was analyzed using the "anova" function (tables S2, S4, and S6, left and right antenna and proboscis) in R . Models for movement pattern across crepuscular periods We defined two different multilevel models separately for dawn and dusk periods (tables S7 and S9, movement pattern in dawn and dusk periods) to understand how the movement pattern of the flies varies by different twilight hours. In the null model, the movement depends only on the mean per fly (fixed effect) and the fly ID (random effect). In the second model (crepuscular-type model), the movement depends only on the crepuscular-type (fixed effect) and the fly ID (random effect). These models were fit using the lmer function (lmerTest package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Last, the winning model was analyzed using the anova function (tables S8 and S10, movement pattern in dawn and dusk periods) in R . Models for movement pattern across recorded hours We defined two different multilevel models (table S11, movement pattern across recorded hours) to understand how the movement of the flies (thereby health) varies by different recording hours. In the null model, the movement depends only on the mean per fly (fixed effect) and the fly ID (random effect). In the second model (recorded hour model), the movement depends only on the recorded hour (fixed effect) and the fly ID (random effect). These models were fit using the lmer function (lmerTest package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Last, the winning model was analyzed using the anova function (table S12, movement pattern across recorded hours) in R .
Models for LFP power spectrum across recorded hours We defined two different multilevel models separately for awake and sleep periods (tables S13 and S15, LFP power spectrum in awake and sleep periods across recorded hours) to understand how the different recording hours (thereby consistency of recordings) affected the LFP power spectrum. In the null model, the LFP power spectrum depends only on the mean per fly (fixed effect) and the fly ID (random effect). In the second model (recorded hour model), the LFP power spectrum depends only on the recorded hour (fixed effect) and the fly ID (random effect). These models were fit using the lmer function (lmerTest package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Last, the winning model was analyzed using the anova function (table S14, LFP power spectrum in awake periods across recorded hours; for the sleep periods, the winning model was the null model, so was not analyzed further) in R . Models for spectral analysis We defined four different multilevel models (table S16) to understand the modulation of the power spectrum by sleep epoch and channel type. In the null model, the power spectrum depends only on the mean per fly (fixed effect) and the fly ID (random effect). In the second model (epoch model), the power spectrum depends only on the LFP epoch type (fixed effect) and the fly ID (random effect). In the third model (channel model), the power spectrum depends only on the channel type (fixed effect) and the fly ID (random effect). In the fourth model (epoch channel model), the power spectrum depends on a combination of the LFP epoch type and the channel type, both used as fixed effects and the fly ID (random effect). These four models were fit using the lmer function (lmerTest package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Last, the top two winning models were compared against each other using anova function in R , to validate whether the winning model (if it is more complex) is actually better than the losing model (if it is simpler). The epoch channel model emerged as the winning model, indicating an important contribution from different channels. The epoch channel was further analyzed with the anova function (table S17) in R . Models for PE event counts We defined two different multilevel models (table S18) to understand the modulation of PE event count by sleep epochs. In the null model, the PE event count depends only on the mean per fly (fixed effect) and the fly ID (random effect). In the second model (time_label model), the PE event count depends only on the specific temporal sleep stage (fixed effect) and the fly ID (random effect). These two models were fit using the lmer function (lmerTest package) in R , and the winning model is identified as the one with the highest log-likelihood by comparing it with the null model and performing a likelihood ratio chi-square test (χ 2 ). Thus, the time_label model emerged as the winning model. The time_label model was further analyzed with the anova function (table S19) in R .
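The multilevel models above were fit with lmer/lmerTest in R; as a loose Python stand-in, the same null-versus-epoch comparison by likelihood ratio can be sketched with statsmodels on synthetic data (model structure only, not the study's data or exact estimator).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic data frame illustrating the structure: one row per observation,
# with fly identity as the grouping (random-effect) variable.
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "fly": np.repeat([f"fly{i}" for i in range(10)], 30),
    "epoch": np.tile(["awake", "presleep", "sleep"], 100),
    "periodicity": rng.normal(0.3, 0.1, 300),
})

# Null model: intercept only; epoch model: epoch as fixed effect; fly as random intercept.
null_fit = smf.mixedlm("periodicity ~ 1", df, groups=df["fly"]).fit(reml=False)
epoch_fit = smf.mixedlm("periodicity ~ epoch", df, groups=df["fly"]).fit(reml=False)

# Likelihood-ratio chi-square test between the nested models.
lr = 2 * (epoch_fit.llf - null_fit.llf)
dof = len(epoch_fit.fe_params) - len(null_fit.fe_params)
p_value = stats.chi2.sf(lr, dof)
print(f"chi2({dof}) = {lr:.2f}, p = {p_value:.4f}")
```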
The influence of platform switching and platform matching on marginal bone loss in immediately inserted dental implants: a retrospective clinical study
a8d76a2a-773d-4ec1-b19c-308b79b54a12
11880450
Dentistry[mh]
Platform switching (PS), in contrast to platform matching (PM), initially referred to a restorative protocol first reported by Lazzara and Porter as a means of limiting circumferential bone loss around dental implants . The concept of platform switching of dental implants to maintain peri-implant bone levels has become increasingly popular in dental implantology . In the 1990s, wide-diameter implants became commercially available. Initially, these implants were restored with standard-diameter abutments instead of wide-diameter abutments due to a lack of matching prosthetic components . Platform switching refers to the use of an abutment or a suprastructure with a diameter at the implant-platform level that is smaller than the implant diameter . This configuration results in a circular horizontal step, which reduces the biologic and mechanical aggressions on the biologic width . One rationale for the beneficial effect of platform switching is that it locates the micro-gap of the implant-abutment connection away from the vertical bone-to-implant contact area . Compared with the conventional restorative procedure using an identical implant and suprastructure diameter, PS is suggested to prevent or reduce marginal bone loss (MBL) . Various systematic reviews and meta-analyses have demonstrated a real benefit of PS for patients. Reduced MBL was observed when using PS implant restorations . Twenty-eight publications with a total of 1216 implants with platform switching and 1157 implants with platform matching were included in the meta-analysis by Chrcanovic, Albrektsson and Wennerberg . The results showed less marginal bone loss at implants with platform switching compared to those with platform matching. The mean difference in MBL between the PS and PM groups increased with longer follow-up times . A recently published systematic review on this topic, including studies with longer follow-up periods between 5 and 10 years, indicated that PS reduced average MBL surrounding implants compared with PM implant-to-abutment connections, favoring the platform-switched approach . Immediate dental implants, which are placed directly after tooth extraction, offer the advantage of reduced treatment time and fewer surgical interventions . However, tooth extraction carries a high risk of bone loss, particularly in the early stages of healing . Studies such as those by Botticelli et al. (2004) and Araújo & Lindhe (2005) have shown significant remodeling of the alveolar ridge post-extraction, leading to both vertical and horizontal bone loss . Therefore, it may be advantageous to use platform-switched implants in immediate implantation to prevent potential bone loss. This topic is not well addressed in the current literature, which is why this study was conceived. The aim is to explore the efficacy of platform switching implants in mitigating the bone resorption commonly associated with immediate dental implants, thereby contributing valuable insights to the field and potentially improving clinical outcomes. The goal of this retrospective study is to investigate and compare platform switching and platform matching with respect to the extent of marginal bone loss and further clinical signs of periodontal status in immediately inserted dental implants.
Study design and patient population In this clinical retrospective study, patients who were treated with endosseous immediate dental implants after tooth extraction at the Clinic for Oral and Maxillofacial Surgery of the University Hospital Gießen, Germany, between the years 2000 and 2023 were included. The patient cohort was assembled after reviewing the surgical plans. The implants must have been under functional load for at least six months. The follow-up examinations took place between June 2023 and February 2024. Study parameters Vertical bone loss The main study parameter is the marginal bone loss measured at the mesial and the distal sides of the implant in both the PS and PM groups. Radiological bone loss was measured using orthopantomograms (OPG). The measurements were performed on a diagnostic monitor using the SIDEXIS XS / 4 Viewer ® (Dentsply Sirona, York, USA) software. First, the presence of peri-implant radiolucency was evaluated and accordingly marked. Subsequently, the calculation of crestal, vertical bone loss around the implant was performed. For this purpose, two radiographs taken at different times were compared. Specifically, the crestal bone level on a postoperative OPG was compared with a current OPG taken at the time of examination. The resultant longitudinal vertical bone loss after implantation was calculated by determining the difference between the current and postoperative crestal bone levels. The measurement described above was performed both mesially and distally on the implant. Implant success In this study, implant success was evaluated according to the criteria established by Buser and Albrektsson, as well as a newly developed set of criteria by Attia. Buser et al. in 1990 defined the following criteria that a successful implant should meet: No persistent pain, foreign body sensation, or dysesthesia. No recurrent purulent peri-implant infection anamnestically. No mobility of the implant. No continuous peri-implant radiolucency. Possibility of prosthetic restoration (superstructure). An implant was considered a failure if any criterion was not met. Explantation was counted as a failure. In addition to the success criteria defined by Buser, the criteria established by Albrektsson were also applied. These criteria were: The implant should be clinically stable. No continuous peri-implant radiolucency should be visible. After the first year under load, vertical bone loss should be less than 0.2 mm/year. Absence of signs of infection, pain, paresthesia, or injury to the mandibular canal. At a five-year observation period, the success rate should be 85%. At the end of a ten-year observation period, the success rate should be 80%. An implant was considered successful only if all criteria were met. Here, too, explantation was considered a failure. Finally, success was evaluated according to the newly developed success score by Attia. This score is divided into four groups: 1. Knockout Criteria : If any of these criteria are met, the implant is defined as unsuccessful and is not subject to further investigation. The presence of these criteria precludes later prosthetic restoration. Implant mobility. Implant fracture. Wrong implant position. 2. Implant-Related Parameters : Each item can receive up to two points in the evaluation. Absence of pain on percussion and palpation.
Compared to the time of implantation, the annual peri-implant marginal bone resorption should not exceed the calculated value (“y = 1.5 + 0.2 mm * (x-1); y = allowable bone loss, x = age of the implant in years”; a worked example is sketched at the end of this section). The average probing depth from four sites should be ≤ 4 mm. Absence of pus on probing. Absence of bleeding on probing. 3. Peri-Implant Soft Tissue and Prosthetic Restoration : Each of these points can receive a maximum of one point. Absence of plaque. Absence of complications of prosthetic restoration, such as fracture or debonding. Presence of healthy mucosa. 4. Patient Satisfaction : Each aspect can receive a maximum of one point. Absence of foreign body sensation. Absence of paresthesia. Aesthetic satisfaction. Satisfaction with masticatory function. Satisfaction with speech ability. A maximum score of 18 can be achieved, with a score of 0 indicating failure. Implants with a score of 1–6 are rated as satisfactory, 7–12 as good, and 13–18 as very good. Inclusion and exclusion criteria for study subjects The inclusion criteria encompassed all patients who received an immediate implant after extraction; the implant must have been under functional load for at least six months. Patients were excluded if any of the following applied during the study period: radiation therapy in the head and neck area, current treatment with bisphosphonates, or pregnancy. Statistical analyses All statistical tests were carried out using SPSS version 28 (IBM Corp., Armonk, NY, USA) and the R-based software Jamovi version 2.3 (The Jamovi Project, Sydney, Australia). Initially, descriptive statistics using means (µ), standard deviations (SD), frequencies (n), and percentages were used to summarize the demographic and clinical characteristics of the study participants. Inferential statistics were then employed to explore the associations between implant-abutment configuration and patients’ characteristics and clinical outcomes. The chi-squared test (χ 2 ), Fisher’s exact test, Mann-Whitney test (U), and Spearman’s correlation were utilised. To identify predictors of total bone loss (TBL) at the mesial and distal sides, multiple linear regression (MLR) was conducted, adjusting for potential confounders. The statistical significance level was set at p < 0.05 for all inferential tests. Ethics statement/confirmation of patients’ permission The ethics committee of the Faculty of Medicine of Justus-Liebig University Giessen approved the study (confirmation number 126/22). Consent was obtained from every patient included in the study. 
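The following minimal sketch (not part of the original analysis) illustrates how the radiographic total bone loss and the age-dependent allowable-loss threshold of the Attia score described above can be computed; all measurement values in it are hypothetical.

```python
# Illustrative only: radiographic total bone loss (TBL) as the difference between
# current and postoperative crestal bone levels, and the age-dependent allowable
# bone loss used in the Attia score, y = 1.5 + 0.2 mm * (x - 1).
# All numeric inputs below are hypothetical, not study data.

def total_bone_loss(baseline_mm: float, current_mm: float) -> float:
    """Longitudinal vertical bone loss = current crestal level - postoperative level."""
    return current_mm - baseline_mm

def allowable_bone_loss(implant_age_years: float) -> float:
    """Allowable cumulative marginal bone loss for an implant of the given age."""
    return 1.5 + 0.2 * (implant_age_years - 1)

# Hypothetical example: mesial crestal level 2.1 mm postoperatively, 2.6 mm at recall,
# with the implant in function for 4 years.
tbl_mesial = total_bone_loss(baseline_mm=2.1, current_mm=2.6)   # 0.5 mm
threshold = allowable_bone_loss(implant_age_years=4)            # 2.1 mm
print(f"mesial TBL = {tbl_mesial:.2f} mm, allowable = {threshold:.2f} mm, "
      f"within limit: {tbl_mesial <= threshold}")
```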
Sample characteristics Out of the 37 included patients, 21 (57%) received platform switching implants, while 16 (43%) received platform matching implants (Table ). The majority were male (64.9%), with no significant (p = 0.666) difference between the platform switching (61.9%) and platform matching (68.8%) groups. The patients had a mean age at the time of surgery of 46.39 ± 20.86 years, with no significant difference between the two study groups (p = 0.175). Chronic illnesses and medication usage were present in 47.2% and 40.5% of the patients, respectively, with no significant differences between the study groups (p = 0.765 and p = 0.089). Smoking status differed significantly; none of the platform matching patients were smokers, compared to 42.9% in the platform switching group (p = 0.005). Most patients (91.9%) brushed their teeth twice daily, with no significant difference between the groups (p = 0.712). Implants were predominantly placed in the upper jaw (91.9%) in both groups (p = 1.000). Trauma was the main indication for implant placement (89.2%), with no significant difference between groups (p = 0.117), followed by prior implant explantation (5.4%), hypodontia in the permanent dentition with extraction of the deciduous tooth (2.7%), and root remnant (2.7%). BEGO RI was the most frequently used implant system (83.8%), with no significant differences between the groups (p = 0.247). The most common implant diameter was 4.50 mm (56.8%), and the lengths 13 mm (40.5%) and 15 mm (37.8%) were prevalent. A significantly higher percentage of 15 mm long implants was observed in the platform matching group (62.5%) compared to the platform switching group (19%) (p = 0.006). 
Service time was significantly longer for platform matching implants (9.69 ± 4.09 years) versus platform switching implants (3.18 ± 1.69 years) (p < 0.001). Most patients received a VMK single crown (89.2%), with no significant difference between the groups (p = 1.000). Clinical outcomes The plaque index showed no significant difference between the platform switching and platform matching groups (p = 0.087), with the majority of patients having a plaque index of 0 (59.5%) (Table ). Mesial probing depths were comparable across both configurations, with a mean of 2.81 mm (p = 0.440). Similarly, distal probing depths showed no significant difference between the groups, with a mean depth of 2.64 mm (p = 0.937). Vestibular and lingual probing depths were also similar, with means of 2.03 mm and 2.20 mm, respectively, and no significant differences observed (p = 0.814 and p = 0.367). Patient complaints and overall satisfaction did not differ significantly between the groups. The majority of patients reported no complaints (70.3%) and very good overall satisfaction (86.5%). Chewing outcomes were highly rated, with 81.1% of patients in both groups reporting very good function (p = 1.000). Speech outcomes were similarly positive, with 81.1% of patients reporting very good results, and no significant difference between the groups (p = 0.224). The Buser score indicated a high success rate across both groups, with 97.3% of implants considered successful (p = 0.432). The Albrektsson score showed no significant difference between the groups, with 38.1% of implants in the platform switching group and 56.3% in the platform matching group classified as successful (p = 0.272) (Fig. ). The Attia score, a continuous measure of implant success, showed no significant difference between the platform switching (16.24 ± 1.04) and platform matching (13.71 ± 6.59) groups (p = 0.794). Aesthetic outcomes, assessed by the Pink (Photograph) and Pink (Radiograph) scores, revealed no significant differences between the platform switching and platform matching groups (p = 0.811 and p = 1.000, respectively). Marginal bone loss Mesial bone had a mean baseline level of 2.09 ± 1.22 mm and a mean current level of 2.57 ± 1.02 mm, with a mean total bone loss of 0.47 ± 1.10 mm (Table ). No significant differences were found between the configuration groups in terms of baseline level or total bone loss. However, the current levels were significantly lower in the platform switching group (2.24 ± 0.74 mm) compared to the platform matching group (3.00 ± 1.19 mm, p = 0.044). Distal bone levels, including baseline, current, and total bone loss, showed no significant differences between the groups (p = 0.940 for baseline, p = 0.728 for current, and p = 0.774 for total bone loss). No significant associations were found between total bone loss (TBL) and demographic characteristics such as sex (p = 0.937 for mesial, p = 0.649 for distal), age (p = 0.914 for mesial, p = 0.679 for distal), chronic illness (p = 0.827 for mesial, p = 0.639 for distal), medication use (p = 0.572 for mesial, p = 0.237 for distal), smoking status (p = 0.433 for mesial, p = 0.664 for distal), and toothbrushing habits (p = 0.911 for mesial, p = 0.570 for distal). These results indicate that these patient characteristics did not significantly impact bone loss outcomes (Table ). 
Additional factors, including jaw location (p = 0.814 for both mesial and distal), indication for implant placement (p = 0.192 for mesial, p = 0.588 for distal), implant system (p = 0.277 for mesial, p = 0.913 for distal), implant diameter (p = 0.423 for mesial, p = 0.169 for distal), implant length (p = 0.403 for mesial, p = 0.748 for distal), service time (p = 0.845 for mesial, p = 0.744 for distal), and superstructure type (p = 0.886 for mesial, p = 0.854 for distal), also showed no significant association with TBL. Regression models of marginal bone loss Multiple linear regression analysis for mesial TBL demonstrated a good fit with an R² value of 0.521 (Table ). The beta coefficient for the platform matching group was positive (β = 2.11, 95% CI: 0.11–4.11, p = 0.039) compared to the platform switching group, indicating increased mesial bone loss with platform matching. This association remained significant after adjusting for service time. Males exhibited less mesial bone loss compared to females (β = -0.60, 95% CI: -1.74–0.55, p = 0.286). Patients’ age at operation had a negligible effect (β = 0.00, 95% CI: -0.04–0.05, p = 0.832). Chronic illness was associated with increased mesial TBL (β = 2.15, 95% CI: -0.12–4.42, p = 0.062), while medication use reduced bone loss (β = -2.00, 95% CI: -4.33–0.33, p = 0.088). The lower jaw had less bone loss, but this was not statistically significant (β = -0.54, 95% CI: -5.00–3.92, p = 0.801). Additionally, a larger implant diameter tended to decrease mesial bone loss, although this association was not statistically significant (β = -2.08, 95% CI: -4.39–0.23, p = 0.074). The regression analysis for distal TBL also showed a good fit with an R² value of 0.440. The beta coefficient for the platform matching group was positive (β = 0.88, 95% CI: -1.37–3.12, p = 0.422) compared to the platform switching group, suggesting greater distal bone loss with platform matching, although this was not statistically significant. Males experienced less distal bone loss (β = -0.67, 95% CI: -1.96–0.61, p = 0.285), while patients’ age at operation had minimal impact (β = -0.04, 95% CI: -0.09–0.02, p = 0.186). Chronic illness was linked to increased distal TBL (β = 2.38, 95% CI: -0.17–4.94, p = 0.065), whereas medication use was associated with decreased bone loss (β = -1.32, 95% CI: -3.94–1.30, p = 0.303). The lower jaw also had less bone loss, but this was not statistically significant (β = -0.11, 95% CI: -5.13–4.90, p = 0.962). Similarly, a larger implant diameter was associated with decreased distal bone loss, but this was not statistically significant (β = -1.13, 95% CI: -3.72–1.47, p = 0.371). 
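For readers who want to reproduce this kind of adjusted analysis, the sketch below shows one way a multiple linear regression of mesial TBL on implant-abutment configuration and potential confounders could be set up. The study itself used SPSS and Jamovi; this Python version, its column names, and the synthetic data are illustrative assumptions, not the authors' code or data.

```python
# Illustrative sketch of an adjusted regression of mesial total bone loss (TBL);
# the data frame below is synthetic and the variable names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40
df = pd.DataFrame({
    "config":        rng.choice(["PS", "PM"], n),
    "sex":           rng.choice(["male", "female"], n),
    "age":           rng.integers(20, 80, n),
    "chronic_ill":   rng.integers(0, 2, n),
    "medication":    rng.integers(0, 2, n),
    "jaw":           rng.choice(["upper", "lower"], n, p=[0.9, 0.1]),
    "diameter_mm":   rng.choice([3.75, 4.5, 5.5], n),
    "service_years": rng.uniform(0.5, 15.0, n),
})
# Synthetic outcome: slightly more bone loss for PM implants plus random noise.
df["tbl_mesial_mm"] = 0.3 + 0.4 * (df["config"] == "PM") + rng.normal(0, 0.3, n)

# Mesial TBL regressed on configuration while adjusting for potential confounders,
# including service time, which differed markedly between the groups.
model = smf.ols(
    "tbl_mesial_mm ~ C(config) + C(sex) + age + chronic_ill + medication"
    " + C(jaw) + diameter_mm + service_years",
    data=df,
).fit()
print(model.params)       # beta coefficients
print(model.conf_int())   # 95% confidence intervals
```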
The goal of this study was to compare two systems of immediately inserted dental implants, specifically focusing on the differences between platform-switching (PS) and platform-matching (PM) implant-abutment connections. There are very few studies on this topic, and the follow-up times and parameters of each study vary . 
The follow-up periods for studies focusing on immediately placed dental implants range from 12 months to 10 years . In this study, the follow-up duration varied between 6 months and 23 years. One of the most important factors for assessing implant success is marginal bone loss (MBL), which is a parameter reported in all studies on this topic. In a study by Pieri et al., MBL in the PS group was 0.2 ± 0.17 mm, and in the PM group, it was 0.51 ± 0.24 mm at 12 months of follow-up . Crespi et al. reported no statistically significant difference between groups, with an MBL of 0.78 ± 0.49 mm for the PS group and 0.73 ± 0.52 mm for the control group after 24 months . In a five-year observation by Slagter et al., MBL was reported as 0.71 ± 0.68 mm mesially and 0.71 ± 0.71 mm distally in PS implants, with the greatest loss occurring within the first month after implant placement . Beschnidt et al., in a five-year follow-up of non-immediate implants placed in private practices, found no significant differences in mean bone level changes between PS and PM implants (− 0.32 ± 0.60 mm vs. −0.13 ± 0.29 mm), highlighting their comparable performance in clinical practice . Canullo et al., in a ten-year follow-up, found a significantly lower MBL in the PS group (0.18 ± 0.14 mm) compared to the control group (0.80 ± 0.40 mm) . In the current study, MBL was 0.47 ± 1.13 mm for the PS group and 0.64 ± 1.13 mm for the PM group. The difference in MBL between platform switching and platform matching was not statistically significant in this study, similar to the findings of Crespi et al. . However, studies by Pieri and Canullo showed a significant advantage for PS implants (p = 0.0004 and p = 0.00108, respectively), with lower MBL values . Additionally, this study’s regression analysis indicated significantly lower mesial MBL in PS implants compared to PM implants, reinforcing previous research in favor of PS. These findings are consistent with a recently published systematic review by Vaghela et al., which concluded that the use of PS implants in immediate placement protocols can lead to a reduction in marginal bone loss compared to PM implants. However, the authors emphasized that these results should be interpreted with caution due to small sample sizes, substantial methodological variability in patient selection criteria, and high risk of bias among the included studies . Overall, while MBL values vary across studies, there is a slight trend favoring platform-switching implants in terms of bone preservation. Additionally, the condition of the soft tissues around the implants should be evaluated. Probing depth (PD) is a useful measure for this purpose. When comparing the mesial, distal, vestibular and lingual PD values, no statistically significant differences were observed between the test and control groups. After averaging the four measured PD values, the mean PD can be compared with other studies. In this study, the mean PD for the PS group was 2.47 ± 0.13 mm, and for the PM group, it was 2.36 ± 0.15 mm. In comparison, Pieri’s study reported a PD of 2.58 ± 0.49 mm for the PS group and 2.71 ± 0.48 mm for the control group . Overall, PD values did not show significant differences between studies or between study groups. Inflammatory changes, implant stability and vertical bone loss were monitored in this study, along with the overall success of the implant using the implant score according to Buser and Albrektsson, as well as the newly published Attia score . 
The Attia score, in particular, offers a more nuanced evaluation by incorporating not only clinical and radiological parameters but also patient-centered outcomes, such as satisfaction with chewing function, speech ability, and aesthetic appearance. This multi-dimensional approach provides a more comprehensive understanding of implant success, considering both biological performance and patient quality of life. By combining these different scoring systems, this study aimed to provide a robust evaluation of implant outcomes over varying follow-up periods, enhancing our understanding of long-term implant success and patient satisfaction. However, future studies should assess the reproducibility and reliability of the Attia score in different populations, especially with longer-term follow-up data. Although this retrospective study did not find a statistically significant difference between the PS and PM implant groups in the monitored factors, it provides a long-term picture of the functionality of both types of implants. Given that after 23 years both implant forms remain functional and no serious periodontal problems were observed, implantologists may be more inclined toward the PS form of the immediate implantation due to its potential advantages in preserving marginal bone levels and soft tissue stability. As a limiting factor of this study, it is necessary to mention the smoking status of the patients. Smokers were included only in the PS group, representing 42.9% of the participants in this group. It is well known that smokers have a higher risk of implant failure and higher MBL values compared to non-smokers . In this study, only the test group, and not the control group, was affected by this factor. Pieri reported that the number of smokers in their study was 7.5% (one participant in the test group and two in the control group). Other studies do not provide exact data on the proportion of smokers, making it difficult to compare and draw definitive conclusions based on this factor . In the context of this study, it can be stated that despite the negative impact of smoking, the PS group exhibited slightly better values than the PM group. To eliminate this bias in future studies, smokers should either be evenly distributed between study groups or excluded altogether to ensure a more balanced comparison. Future research should focus on conducting prospective, randomized controlled trials with larger sample sizes to confirm the long-term advantages of platform switching implants, especially in smokers or patients with other risk factors that could influence implant success. Additionally, further studies should explore the biological mechanisms behind the differences in marginal bone loss between PS and PM implants, particularly in various clinical scenarios such as immediate versus delayed implantation, different bone qualities, and varying patient health conditions. Furthermore, integrating more advanced imaging techniques and biomarkers to track peri-implant tissue changes over time could help provide deeper insights into how platform switching affects the peri-implant environment. Finally, as patient satisfaction is becoming increasingly important in implantology, future studies should also explore how PS and PM implants influence long-term aesthetic outcomes and patient-reported quality of life, offering a more holistic understanding of implant success. 
This study compared the long-term performance of immediately placed platform switching (PS) and platform matching (PM) dental implants. Both systems showed stable clinical outcomes and high patient satisfaction over follow-up periods ranging from 6 months to 23 years, with no significant differences in probing depths, plaque index, or overall implant success. However, PS implants demonstrated significantly lower current mesial bone levels compared to PM implants, and regression analysis indicated greater mesial bone loss with PM implants. Despite both systems being viable options, platform switching may offer advantages in preserving peri-implant bone. Further prospective studies are needed to confirm these findings and explore the long-term benefits of PS implants, especially in relation to bone preservation in immediately placed dental implants.
Biochar is colonized by select arbuscular mycorrhizal fungi in agricultural soils
c210e511-a410-4905-8019-d448157a6315
11166811
Microbiology[mh]
Arbuscular mycorrhizal fungi (AMF) colonize most terrestrial plants, including most domesticated crops (Smith and Read ). AMF form a nutritional symbiosis wherein the plant provides photosynthates and acts as the sole carbon source for the fungus, in exchange for soil-derived mineral nutrients furnished by the fungi (Parniske ). These fungi expand the volume of soil which plants may explore for nutrients, mainly phosphorus and nitrogen to a lesser extent, to trade to their host plant. Under low nutrient conditions, mycorrhizal plants may transfer an elevated proportion of carbon resources to symbiotic AMF to promote AMF phosphorus acquisition, a more resource efficient strategy than constructing additional roots or root hairs (Andrino et al. ). Sustainable agricultural management practices – such as no-till and reduced fertilization – can increase AMF abundance in soils by allowing for expanding mycelial networks and promoting root colonization (Fitter et al. ). In addition, integration of AMF into cropping systems also can improve: salt-tolerance (Saxena et al. ), drought tolerance (Tang et al. ), and disease resistance of host plants (Song et al. ), thereby improving resilience in the face of climate change. Sustainable cropping systems, promoting AMF, and growing plants tolerant to adverse conditions can help us produce food in increasingly adverse environments. Quinoa ( Chenopodium quinoa Willd.) is a seed-crop of increasing importance in North America as it is salt, drought, and frost tolerant (Ruiz et al. ). Increased interest in growing quinoa grain in Canada has fueled recent agronomic research into optimal growing conditions at Northern latitudes, although no work has been conducted on quinoa AMF in Canada (Nurse et al. ). While quinoa has previously been categorized as non-mycorrhizal, a number of recent studies have shown AMF colonization of quinoa roots, although there is reduction of AMF following quinoa rotations and colonization rates have been reported as diminished in subsequent crops (Urcelay et al. ; Wieme et al. ; Kellogg et al. ). Furthermore, AMF inoculation of quinoa plants has been shown to improve physiological markers (such as chlorophyll content and vegetative growth), improve response to stress, and improve soil health following harvest, in comparison to non-inoculated quinoa plants (Benaffari et al. ). Quinoa is deeply rooted, however, and in many studies symbiotic association of AMF with quinoa has been variable, with negligible colonization or colonization rates lower than for other economically important crops (Urcelay et al. ). Colonization with AMF in this plant family may follow a pathogenic root response including phytoalexin production which could nevertheless prime the plant, resulting in some of the benefits observed (Yactayo-Chang et al. ). Research into quinoa mycorrhizae is still in its infancy, and further research into AMF associating with quinoa may be beneficial to meeting the needs of this growing market. Biochar benefits soil health through increasing nutrient availability as well as improving water retention and soil structure, through decreased bulk density (Lehmann et al. ; Palansooriya et al. ). Depending upon the physical characteristics of the biochar, it can have two important overall applications to agriculture: improving crop yield and increasing soil organic matter content. 
Many studies suggest positive effects of biochar on yield in meta-analyses, with application rates ranging from 5 to 20 Mg ha-1, although lower application rates can be used in combination with fertilization (< 1 Mg ha-1) (Joseph et al. ). Application of biochar has improved quinoa yields under both drought (Yang et al. ) and salinity-stressed conditions (Abbas et al. ), possibly by adsorbing excess Na + and improving water retention. In addition, the agronomic importance of biochar can be supplemented through its ability to sequester carbon in arable soils, with an ability to reduce net greenhouse gas emissions by 1.8 Pg CO2-C annually without sacrificing food security (Woolf et al. ). Reduced net greenhouse gas emissions through biochar addition operates through increased methanotrophy, reduced N 2 O emissions as well as through the storage of recalcitrant carbon within biochar itself (Woolf et al. ). Repeated applications of biochar in arable soils builds soil organic carbon (SOC) stocks, whereas other unpyrolyzed carbon additions would be regularly decomposed, resulting in carbon mineralization (Joseph et al. ). Therefore biochar amendment is a promising avenue to improve crop yields while promoting sustainable agriculture and environmental benefits. The capacity of AMF to colonize soil and biochar likely is influenced by nutrient availability and soil structure (George et al. ). Because of biochar’s ability to act as a phosphorus source (Glaser and Lehr ) as well as its heterogeneous structure, it is likely to represent a distinctive microhabitat for AMF. Indeed, previous research has shown enrichment of AMF in response to biochar addition, which authors posit may be attributable to the physical properties of biochar (Jin ). Arbuscular mycorrhizal fungi have been shown to colonize biochar, their hyphae penetrating small (< 10 μm) micropores and translocating nutrients (Hammer et al. ). Plants colonized by a single strain of arbuscular mycorrhizal fungus and supplemented with biochar have been reported to gain a productivity boost (Hammer et al. ). Soils, however, contain a mix of many different AMF with plants becoming colonized by a consortium of fungi, usually representative of the available inoculum in soils. Agricultural soils are also often amended with nutrients, both organic and inorganic as well as biochar. A number of mechanisms have been proposed to explain the increased relative abundance of AMF in soils amended with biochar, including changing the nutrient profiles of soil, altering AMF-microorganism interactions, altering AMF-plant interactions, and providing refugia for colonizing AMF (Warnock et al. ). Here, we conducted a greenhouse experiment wherein we buried packets of biochar in root-exclusion mesh bags to assess AMF hyphal colonization in arable soils from multiple locations, and we amended those soils with manure or fertilizer using quinoa as the host plant. To our knowledge, an examination of the colonization of biochar by naturally-occurring AMF from contrasting soils with different amendments, has not been done. We hypothesize that AMF diversity and community composition in pure biochar will differ from the surrounding soil because of differences in nutrient availability, chemical composition, and structure between the two. 
Soils and amendment characterization The top 15 cm layer of soils was collected in May 2018 from four long-term cropping field sites across Alberta, Canada: (i) a Dark Gray Luvisol from Beaverlodge (55°12'01"N, 119°23'51"W), (ii) an Orthic Brown Chernozem from Vauxhall (50°04'11"N, 112°0529"W), (iii) an Orthic Black Chernozem from Olds (51°43'46"N, 113°57'42"W), and (iv) an Orthic Brown Chernozem from Cranford (49°45'51"N, 112°20'31"W). All soils were deep and well-drained, and derived from glaciofluvial or glaciolacustrine deposits (Alberta ) and cropped to wheat ( Triticum aestivum L.) in Beaverlodge, potato ( Solanum tuberosum L.) in Vauxhall and Cranford, and barley ( Hordeum vulgare L.) in Olds. Biochar was produced from pinewood ( Pinus spp.) utilizing Engineered Biocarbon™ technology, i.e., a front-end biomass pyrolysis (< 650 °C) followed by a patented post-pyrolysis treatment step (Cool Planet Energy Systems, Inc., Greenwood Village, CO). The material was characterized by a surface area of 152 m 2 g − 1 , an ash content of 1.7%, a bulk density of 122 kg m − 3 (dry mass basis), and a volatile matter content of 25.4% (dry mass basis) (InnoTech Alberta Inc., Vegreville, AB). Manure was collected from cattle housed in a tie-stall barn. The material contained an average water content of 77–79% and resulted from a diet of 60% barley silage, 35% barley grain, and 5% standard supplement. Selected soil, biochar, and manure chemical properties are presented in Table . Experimental design A greenhouse experiment was conducted at the Lethbridge Research and Development Centre of Agriculture and Agri-Food Canada (Lethbridge, AB). Each nursery pot (4-L) was filled with 3 kg of air-dried, sieved soil (< 2 mm). Amendments were manually applied at a rate of 3.0 Mg ha − 1 (biochar), 200.0 Mg ha − 1 and 3.0 Mg ha − 1 (manure + biochar, respectively), and 150 kg N ha − 1 [(NH 4 ) 2 SO 4 ], 50 kg P ha − 1 (KH 2 PO 4 ) and 3.0 Mg ha − 1 (fertilizer-NP + biochar, respectively) generating four experimental treatments for each soil type, i.e., un-amended control (C), biochar (B), biochar + manure (B + M), biochar + NP-fertilizer (B + F). Six nylon-sealed (35 μm mesh permitting AMF hyphal penetration (Friese and Allen ; Hempel et al. ; Błaszkowski et al. ) biochar packets (1.5 g, 3 × 3 cm) were buried at a depth of 5 cm inside each pot (except for C). Four replicate pots were prepared for each soil type x treatment combination and randomly arranged in the greenhouse. Eight seeds of quinoa cv. NQ94PT were sown in each pot on July 11, 2018. Plant density was reduced to four per pot two weeks after emergence. All pots were irrigated with distilled water during the experiment. The greenhouse was kept at 19 °C ± 0.5 for the duration of the experiment, with no added light. Quinoa was harvested on November 26, 2018, biomass was harvested, seed weight was recorded, and fresh soil from each pot was homogenized and sub-sampled for chemical analysis (50 g) or DNA extraction (5 g). Biochar was retrieved from packets and homogenized. DNA was extracted from soil and biochar stored at -20 °C and analyzed within a month of sample collection. Soil chemical analysis Soil pH and EC were determined using a 2:1 (water: soil) slurry. Olsen P was determined by extracting 2.5 g of air-dried soil with 25 mL of 0.5 M NaHCO 3 (Olsen et al. ). Concentrations were quantified by colorimetry with a discrete analyzer (EasyChem Pro, Systea Analytical Technology, Anagni, Italy). 
Water-extractable organic C [(WEOC); mg C kg − 1 ] and water-extractable total N [(WETN); mg N kg − 1 ] were quantified in syringe-filtered 15 mL aliquots (< 0.45 μm) using a TC and TN combustion analyzer (TOC-V CSH and TNM-1 Shimadzu Corp., Kyoto, Japan) following the procedure of (Chantigny et al., 1999). A sub-sample of air-dried soil (< 2 mm) was ball-milled (< 0.15 mm) and used to determine total C (TC), total nitrogen (TN), 15 N/ 14 N isotope ratios (δ 15 N‰), and 13 C/ 12 C isotope ratios (δ 13 C‰) by dry combustion using a CN analyzer (NC2100, Carlo Erba Instruments, Milan, Italy) coupled with an Optima mass spectrometer (Micromass, Manchester, UK). NH 4 + -N and NO 3 -N were determined by extracting 5 g of soil with 25 mL of 2 M KCl and quantified by the modified indophenol blue technique (Sims et al., 1995) using a microplate spectrophotometer at 650 nm (Multiskan GO, Thermo Fisher Scientific, Waltham, MA). DNA extraction and sequencing DNA was extracted from soil and biochar by using the Qiagen Powerlyzer Powersoil DNA extraction kit as per manufacturer protocols, combined with bead beating using a MP Biomedical Fast Prep Bead Beater (MP Biomedicals, Ohio, USA). DNA purity was confirmed using a Biodrop spectrophotometer and DNA concentrations determined using a Qubit v4 fluorometer (ThermoFisher Scientific, Massachusetts, USA). All samples were checked to ensure amplification using the AMF SSU primer pair NS31 (5’-TTGGAGGGCAAGTCTGGTGCC-3’) and AML2 (5’-GAACCCAAACACTTTGGTTTCC-3’) with the conditions outlined in Morgan and Egerton-Warburton . Libraries were prepared and sequencing was performed by Genome Quebec using an Illumina MiSeq with V3 chemistry at 2 × 300 bp paired-end (PE) configuration (Illumina, San Diego, California, USA). Each PCR was conducted in a 7 µL reaction: 1X PCR Buffer with 18mM MgCl 2 (Roche), 5% DMSO (Roche), 0.2 mM dNTP mix (NEB), 0.02 U/µL FastStart High Fi (Roche), 0.5 µM NS31 primer, 0.5 µM AML2 primer, 1 µL of 10-fold diluted DNA template and molecular grade water. Thermocycler conditions were as follows: denaturation at 94 °C for 3 min; 35 cycles of 94 °C for 45 s, 63 °C for 60 s, 72 °C for 90 s; and a final extension of 72 °C for 10 min. Sequence processing was performed in QIIME2 with Dada2 (Bolyen et al. ). 5’ ends of the forward reads were trimmed at 21 bp, and the 3’ ends of the reverse reads were trimmed at 22 bp, corresponding to a median QC over 20. Forward reads were then truncated to a max of 295 bp and reverse reads were truncated to a max of 283 bp. Adaptor sequences were removed using filterANDtrim. Sequences were then dereplicated, chimeras were removed, and amplicon sequence variants (ASVs) were resolved with Dada2 in QIIME2. Taxonomy was assigned using BLAST + with a trained MAARJAM v2 database to derive virtual taxon assignments (VTX) (Opik et al. ). Due to the high number of unknown assignments using VTXs, α-diversity and β-diversity measures were calculated using the original ASVs. Sequences were not rarefied. Prior to quality filtering, read counts ranged from 13,776 to 263,566 reads per sample for a total of 15,253,943 reads. After merging reads, 3,659,625 reads passed quality filtering (23% of the total read count). The average read count in the soil samples was 3,121 reads (with a maximum of 21,582 reads) and the average read count in the biochar samples was 1,367 (with a maximum of 22,085 reads). A total of 2886 AMF ASVs were identified. 
All sequences identified matched to Glomeromycota, with 88.7% of these supporting classification past the order level. Statistical analysis Statistical analysis was performed in R v3.6.4 (R Development Core Team, 2008). Diversity analyses were conducted in the Phyloseq R package and included α-diversity (richness, Pielou’s evenness, and Shannon diversity) and β-diversity metrics, including Bray-Curtis distance matrices (McMurdie and Holmes ). Normalization of ASV counts for β-diversity was undertaken using a variance stabilizing transformation implemented in the DESeq2 R package (Love et al. ). The β-diversity metrics were visualized in non-metric multidimensional scaling (NMDS) and dbRDA plots in vegan (Oksanen et al. ). Soil chemical properties were used as explanatory variables to determine their effects on AMF community composition. Forward selection of environmental variables was used to reduce multicollinearity of the model. The vegan R package was used to determine the significance of β-diversity differences with PERMANOVAs and correlations with environmental variables using the Mantel test (Oksanen et al. ). Count data were not normalized for α-diversity or relative abundance analysis. Linear mixed models (LMM; method = REML) were constructed for comparisons of α-diversity between soil sites (Beaverlodge, Vauxhall, Olds, and Cranford; within either bulk soil or biochar packets), between nutrient amendments (F, B + M, and B + F; within either bulk soil or biochar packets), and between bulk soil and biochar packets themselves. The homogeneity of variance, normality, and outliers of the residuals were assessed in DHARMa. Data were square-root transformed where appropriate to correct for non-normality and heteroscedasticity in the R package MASS. Significant differences in α-diversity were assessed using a two-way ANOVA with a Tukey post-hoc test when appropriate in R. Relative abundance tables were made using the Phyloseq package in R and stacked bar graphs were created using ggplot2 (Wickham ). All β-diversity, α-diversity, relative abundance, and correlation analyses were performed using ASV data. Spearman correlations between α-diversity metrics and environmental parameters were conducted in corrplot v0.92. Multiple comparison corrections were performed using the Benjamini-Hochberg method. 
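As a minimal illustration of the diversity metrics named above (the actual analysis used the phyloseq, vegan, and DESeq2 R packages), the Python sketch below computes richness, Shannon diversity, Pielou's evenness, and Bray-Curtis dissimilarities from a toy ASV count table; the counts are invented, and the use of relative abundances in place of the variance-stabilizing transformation is a simplifying assumption.

```python
# Toy example only: diversity metrics from a hypothetical ASV count table
# (rows = samples, columns = ASVs). Not the study's data or pipeline.
import numpy as np
from scipy.spatial.distance import pdist, squareform

counts = np.array([
    [120,  30,  0,  5,  60],
    [ 80,   0, 10, 25,  40],
    [  5, 200, 15,  0,  10],
])

def shannon(row):
    p = row[row > 0] / row.sum()
    return float(-(p * np.log(p)).sum())

richness = (counts > 0).sum(axis=1)                 # observed ASVs per sample
H = np.array([shannon(r) for r in counts])          # Shannon diversity
evenness = H / np.log(richness)                     # Pielou's evenness

# Bray-Curtis dissimilarities between samples, the basis for NMDS and PERMANOVA;
# relative abundances are used here instead of a variance-stabilizing transformation.
rel = counts / counts.sum(axis=1, keepdims=True)
bray = squareform(pdist(rel, metric="braycurtis"))

print("richness:", richness)
print("Shannon:", H.round(2), "evenness:", evenness.round(2))
print("Bray-Curtis matrix:\n", bray.round(3))
```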
Linear mixed models were constructed (LMM; Method = REML) for comparisons of α-diversity between soil sites (Beaverlodge, Vauxhall, Olds, and Cranford; within either bulk soil or biochar packets), between nutrient amendments (F, B + M, and B + F; within either bulk soil or biochar packets), and between bulk soil vs. biochar packets themselves. The homogeneity of variance, normality, and outliers of the residuals were assessed in DHARMa. Data were square-root transformed where appropriate to correct for non-normality and heteroscedasticity in the R package MASS. Significant differences in α-diversity were assessed using a two-way ANOVA with a Tukey post-hoc test when appropriate in R. Relative abundance tables were made using the phyloseq package in R and stacked bar graphs were created using ggplot2 (Wickham ). All β-diversity, α-diversity, relative abundance, and correlation analyses were performed using ASV data. Spearman correlations between α-diversity metrics and environmental parameters were conducted in corrplot v0.92. Multiple comparison corrections were performed using the Benjamini-Hochberg method. PERMANOVA analysis showed that community composition differed by site ( P < 0.001), sample type (soil or biochar packet) ( P < 0.001), and amendment ( P < 0.001), with substantial differences in taxonomy, particularly across sample types (Supp. Table ). Average richness across all samples was 39.74 ± 22.45, average Pielou's evenness was 0.49 ± 0.16, and average Shannon diversity was 1.76 ± 0.62. Across all soils, richness was negatively correlated with net seed dry weight ( r = -0.37, P = 0.033) and δ13C‰ ( r = -0.41, P = 0.033), and Shannon diversity was negatively correlated with net seed dry weight ( r = -0.31, P = 0.001). Community composition was significantly correlated with EC, TC, TN, WETN, and total quinoa biomass according to Mantel tests, although all correlations were weak (Supp. Table ). Most sequences belonged to Paraglomerales (78%), with 18% belonging to Glomerales, 4% belonging to Archaeosporales, and < 1% belonging to Diversisporales. Across soil and biochar samples, 18 VTs were observed, three of which, VTX0039, VTX00155, and VTX00419, were unique to the biochar samples (Supp. Table ). The most abundant VTs were VTX00348 (82.7%), VTX00444 (10.2%), VTX00067 (2.4%), and VTX00004 (1.1%), the first two of which belonged to Paraglomerales and were found in both soil and biochar. VTX00348 was the single most abundant VT in all sites, in both soil and biochar samples, and in all soil amendments except for B + F. Most VTs were below 5% relative abundance.
AMF communities are distinct between each soil type
Diversity and community composition differed significantly across sites. Main effects in relative abundance show differences between soil types and treatments, and illustrate the differences of AMF identified in biochar packets (Fig. ). Glomerales had higher relative abundance in Beaverlodge and Vauxhall than in Cranford and Olds soils (Fig. ). Paraglomerales were most abundant in all soils and biochar packets, and were reduced with the application of NP-fertilizer-amended biochar to soils. Every soil type harbored a distinct AMF community ( P = 0.001) (Supp. Table 4). Community differences were supported by dbRDA ordination plots (Fig. a). Vauxhall soils had higher Pielou's evenness than Olds soils ( P = 0.01) and Beaverlodge ( P = 0.04), and Vauxhall soils had higher Shannon diversity than Olds ( P = 0.01) and Beaverlodge ( P = 0.01).
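The site-level α-diversity contrasts reported above (for example, higher Shannon diversity in Vauxhall than in Olds or Beaverlodge soils) come from the mixed-model and ANOVA procedure described in the statistical methods. A minimal sketch of that procedure follows; the data frame alpha_df, its column names, and the random term are assumptions made for illustration, since the exact model structure is not reported.

library(lme4)
library(DHARMa)

# alpha_df: assumed data frame with one row per sample and columns
#           shannon, richness, evenness, site, amendment, sample_type, pot, plus soil variables

# Linear mixed model fit by REML; a square-root transform and a pot-level
# random intercept are shown here as plausible (assumed) choices
m <- lmer(sqrt(shannon) ~ site * amendment + (1 | pot), data = alpha_df, REML = TRUE)
plot(simulateResiduals(m))        # DHARMa checks: uniformity, dispersion, outliers

# Two-way ANOVA with Tukey post-hoc comparisons on the same response
a <- aov(sqrt(shannon) ~ site * amendment, data = alpha_df)
TukeyHSD(a, which = "site")

# Spearman correlations with soil/yield variables, Benjamini-Hochberg corrected
vars  <- c("TC", "TN", "seed_dry_weight")   # assumed column names
p_raw <- sapply(vars, function(v)
  cor.test(alpha_df$shannon, alpha_df[[v]], method = "spearman")$p.value)
p.adjust(p_raw, method = "BH")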
All soil and yield parameters correlated with AMF community composition (TN, TC, WETN, EC, biomass, etc.), with the exception of NH4+ and whole plant tissue weight (including seeds; Supp. Table ). The environmental parameters with the most explanatory power regarding AMF community composition were TC, biomass, and pH (Fig. a). Indicator species analysis identified 0, 4, 4, and 4 unique indicator ASVs within Beaverlodge, Cranford, Olds, and Vauxhall soils, respectively ( P < 0.05) (Supp. Table 5).
Biochar colonization
While ASVs were used for β-diversity and α-diversity metrics, sequences were also taxonomically assigned. Within the ASV dataset, only 12 VTXs were identified in biochar packets, including two VTXs which were not observed in the soils (Glomus VTX00419 and Glomus VTX00155), compared to 17 in soil samples (Fig. c). Consequently, AMF communities differed between soil and biochar packets, with significantly lower richness and diversity in biochar packets (Fig. b; Table ; Suppl. Table ). To ensure PERMANOVA results were attributable to community composition differences and not dispersion differences, a betadispersion analysis was performed, showing that dispersion did not differ significantly between soil and biochar compartments ( P = 0.5345). An average of 29.87 ASVs were present in biochar, composed primarily of Paraglomerales taxa, while an average of 50.20 ASVs found in the surrounding soils (Table ) primarily comprised Glomerales. Differential abundance analysis showed 14 Glomerales ASVs which were most abundant in bulk soil samples, while 6 Paraglomerales ASVs were most abundant in biochar packets.
Amendment treatments
While amendment likely contributed to community compositional differences, there were no significant differences in community composition except between B + M and B + F ( P < 0.001). Community composition of all soil amendments resembled those of the C soils ( P > 0.05). Ordination revealed no reliable trend between soil amendment and community composition (Fig. a; Suppl. Table 7). Amendments had no impact on observed AMF ASV richness; however, fertilizer addition increased Shannon diversity and Pielou's evenness (Table ). All nutrient additions increased Archaeosporales relative abundance versus the C treatment (Fig. ). B + F increased Glomerales relative abundance and decreased Paraglomerales abundance relative to the C treatment (Fig. ).
Naturally occurring Paraglomus appear to predominately colonize non-activated biochar (Fig. ). Despite differences in AMF communities among soil types acquired from distant locations with distinct characteristics (Table ), we nonetheless found select AMF in biochar packets across these soil types and across treatments (Fig. ). We found two Glomus and one Gigaspora virtual taxa exclusively in the biochar packets (Suppl. Table 3) and never in the soil samples. Because AMF are obligate biotrophs and require a host plant to proliferate, with the fungus growing towards the biochar through the soil while actively deriving all carbon directly from quinoa root cells, it is unlikely that these were absent from the soils. Rather, in all likelihood, we did not sample intensively enough to capture these three virtual taxa in soil. Of the 19 virtual taxa found in soils, 12 also were found in biochar, showing that 7 virtual taxa were not represented in the biochar. This observation could represent AMF with traits which do not form long hyphal strands throughout soil and thus did not extend far enough to colonize the packets. The AMF absent from biochar also might have the capacity to detect nutrients, and detecting none (the biochar was non-activated, meaning no nutrients were added) (Table ), they explored other patches of soil that did contain nutrients.
Finally, our sampling effort might have missed detecting these AMF in biochar. Nevertheless, the edaphically distinct soils used in this study harboured distinct AMF communities (Fig. ) and those communities shifted with amendment addition. Despite differences in AMF community composition between soil types and amendments, we found Paraglomus preferentially colonized biochar. The relative abundance of Paraglomeraceae may be unaffected by, or increased in, non-activated biochar packets, suggesting it may display unique hyphal exploratory traits. However, it is critical to note that high-throughput sequencing data are compositional, and therefore an increase in Paraglomeraceae abundance may instead reflect the reduced abundance of other AMF taxa. AMF life history traits include colonizing ability, dispersal ability, stress tolerance, disturbance tolerance, reproduction versus vegetative growth investment, and reproductive mode (Hart and Reader ; Powell et al. ; Horsch et al. ). Paraglomus may have a life history strategy that favours investment into hyphal exploratory structures (absorptive hyphae, runner hyphae, and hyphal bridges) over the formation of more internal root colonization structures (infection units, hyphae, contact points) (Hart and Reader ). Paraglomeraceae are reported to be largely absent from plant roots and AMF spore communities, while also forming the most abundant taxonomic group within soils (Hempel et al. ). Therefore, it is possible that the differences seen here between soil and biochar communities may be attributable to Paraglomus life history strategies. While many taxonomic groups (including Archaeosporales, Diversisporaceae, and Acaulosporaceae) invest more energy into external structures, only Paraglomeraceae was enriched in our biochar packets. Furthermore, the biochar used in this study was not activated; that is, no nutrients were added to the packets prior to the experimental setup. The non-activated biochar had a higher C:N ratio than the surrounding soil, representing an environment with lower nitrogen content (Table ). Thus, Paraglomeraceae is either stimulated by low-nitrogen conditions or, more likely, constitutively explores soil without the capacity to detect and alter hyphal exploration in response to nutrient patches. Biochar represents a unique habitat of rough, porous material with large amounts of aromatic carbon that is capable of harbouring AMF within small microsites (Warnock et al. ; Romero et al. ). Incubation experiments of biochar with soil microbial communities have identified microbial oxidation of biochar, both in the presence and absence of soil (Kuzyakov et al. ; Zimmerman ). Phosphate-solubilizing bacteria (PSB) attached to AMF hyphae facilitate metabolism and uptake of phosphate by AMF (Sharma et al. ). Therefore, bacterially mediated degradation of biochar may be facilitated by AMF colonization of biochar. This degradation may take the form of carbon oxidation and/or phosphate solubilization from biochar, liberating phosphorus and recalcitrant carbon compounds from biochar to AMF and the surrounding soil. Thus, synergism between PSB and AMF might contribute to the degradation of biochar and increased availability of nutrients to plant communities. Alternatively, the host plant quinoa may preferentially associate with Paraglomeraceae and therefore enrich it over other AMF species present in the soils. Quinoa associates with AMF (Wieme et al.
), however, to our knowledge, a preference for Paraglomeraceae has not been reported. AMF colonization of quinoa roots was found to be lower than that of other crops such as wheat, chickpea ( Cicer arietinum ), and barley (Wieme et al. ). Furthermore, Cai et al. found that bacterial and fungal community diversity associated with quinoa increased with elevation. They suggested that root-associating fungal communities were deterministic with respect to edaphic characteristics. González-Teuber et al. reported that quinoa was significantly affected by the presence of endophyte fungi, and surmised that quinoa may benefit in drought conditions from endophytic associations. Conversely, Urcelay et al. found no AMF colonization in quinoa in the presence of a pathogenic root fungus. Future studies should determine the extent of root colonization of quinoa by local AMF within Canadian soils. Paraglomus has been found to be selected by other plants. For example, Xiao et al. found that the growth of the invasive plant Chromolaena odorata resulted in increased Paraglomus in soils, correlating with improved competitive outcomes for the plant. Plants exposed to high-stress environments may depend on Paraglomus for water retention and nutrients (Zhang et al. ). Paraglomus relative abundance increases in cropping systems with low soil pH (Dai et al. ), is widespread throughout agricultural soils (Gosling et al. ), and occupies low-pH niches globally (Davison et al. ). However, Gosling et al. found Paraglomus associated more often with organic than with conventional farming practices. The relationships of AMF community composition, diversity, and abundance with P and N availability have been well documented (George et al. ; Treseder and Allen ; Johnson ; Qiu et al. ). Biogeographic studies have identified strong influences on AMF diversity, with high temperature, low C, and low N pressing community composition in one direction, and high precipitation, low pH, low K, and low P pressing it in the other direction (Davison et al. ). This study supports such findings, showing that distinct communities from soils sourced across a wide geographic range were associated with a strong pH gradient and differed considerably in nutrient content (Table ). Thus, soil physico-chemical parameters appeared to influence AMF community composition in our study. Similar to other studies, N content in soils can shift AMF communities and can decrease AMF diversity (Zhang et al. ). As AMF abundance and diversity increase in low-nutrient conditions, increased sporulation and spore-carrying hyphal structures are likely an indication of the stimulation of AMF by host plants in response to a lack of adequate nutrients. This is supported within our dataset, wherein soils with low N and P availability also exhibited elevated richness and a community structure distinct from other soils; these soils were also associated with diminished Glomerales relative abundance. It could be that we missed observing Glomerales present in roots, as it is often more abundant in roots than in soil (Hart and Reader ). Interestingly, NP-fertilizer addition decreased Paraglomerales but increased Glomerales relative abundances, whereas Paraglomerales abundance increased with manure addition, while Glomus abundance decreased with manure addition, suggesting an opposing relationship with a complex, carbon-rich nutrient amendment. Sheldrake et al.
showed that removing litter as a complex nutrient source shifted AMF community composition in soils, and Elzobair et al. showed significant changes in AMF abundance with biochar and manure amendment. Our findings build upon these previous works, suggesting that AMF community composition can also be affected by nutrient amendments known to alter AMF absolute abundance. We suggest that quinoa is able to associate with a variety of AMF found across arable soils in western Canada. Paraglomeraceae predominately colonizes non-activated biochar, and this may represent hyphal exploratory traits unique to the taxon. Biochar was colonized by select naturally occurring soil-derived AMF, with the site of soil collection a strong indicator of AMF community composition.
Using Noninvasive Electrophysiology to Determine Time Windows of Neuroprotection in Optic Neuropathies
1e4dd1bc-79b6-4783-aa1a-0610cd580699
9145583
Physiology[mh]
The death of retinal ganglion cells (RGCs) and their axons is the final common pathway of optic neuropathies resulting in loss of vision . Neuroprotective strategies aimed at preventing loss of RGCs and sparing their function have been an area of intense investigation in animal models . The great majority of experimental studies on neuroprotective strategies have been performed in glaucoma models using a large variety of neuroprotectants targeting multiple molecular pathways, often with impressive positive effects . While neuroprotection studies in experimental models provide powerful proofs of principle, translation of neuroprotective strategies to the clinical application remains elusive . One caveat of experimental models is that they are a gross approximation of the corresponding clinical condition , resulting in limited concordance of treatment effects between preclinical models and clinical trials. Another limitation is that results obtained in animal models most often reflect neuroprotective protocols started in temporal proximity of the induction of the pathological condition, while in the clinical condition therapeutical options are generally initiated after diagnosis that may occur relatively late over the course of the disease. A further limitation is that the sophisticated methods to assess RGC structure and function in experimental models are not generally applicable in the clinical setting. Here, we offer a perspective on the optimal time window for neuroprotective treatments to rescue RGC from death and preserve their function based on noninvasive methods to assess RGC functional integrity that can be used both in experimental models and clinical trials. In progressive optic neuropathies, the tipping point represents the idealized transition from a physiological state to a pathological state. During the period preceding the tipping point (critical period) accumulating adverse factors eventually overwhelm homeostatic mechanisms and cause irreversible and progressive cell death. The duration of the critical period of transition can be of the order of years, as in glaucoma, or months, as in Leber’s Hereditary Optic Neuropathy (LHON) , and its identification would provide a red flag of impending disease and an opportunity to consider neuroprotective treatment in a time window where altered conditions may be still capable of reversal. While the tipping point is a well-established intuitive concept , its identification is challenging as phenotypic expression and molecular changes occurring during the critical period overlap with those of the normal condition, and homeostatic neuroplasticity mechanisms to maintain normal vision offset pathological alteration . Later stages are dominated by cell survival and associated maladaptive processes including rewiring of the neural tissue and disruption of function that define the manifest disease state . As sketched in , there are several potential therapeutical time windows for neuroprotection, each of them probably resulting in a different outcome. Prophylactic neuroprotection (Rx_t0 in ) based on risk factors only is not currently considered in the clinical setting . Typically, neuroprotective actions are considered by the time the disease is manifest (Rx_t3, Rx_t4 in ) with the goal of slowing further damage. 
If a goal of neuroprotective therapy is to preserve RGC integrity and have a long-term efficacy, then it should be initiated as early as possible, ideally at pre-clinical stages (Rx_t1, Rx_t2, in ) where adaptive neural mechanisms may be still reversible. At preclinical stages, noninvasive structural RGC and RNFL assessments are unlikely to provide meaningful clinical indications . In contrast, adaptive changes occurring during the critical period may impair the electrophysiological response of RGCs to visual stimuli, which can be used as biomarker of impending disease, to monitor its progression, and to provide a rationale for initiating neuroprotective treatment. The electrophysiological activity of RGCs and their axons can be tested with specific variants of the electroretinogram (ERG) . The best-understood and most sensitive technique is the ERG in response to contrast-reversing patterns (Pattern Electroretinogram, PERG). While the precise cellular sources of the PERG signal are not known, the PERG depends on the presence of functional RGCs, as it is rapidly abolished after the optic nerve crush that results in RGC degeneration, while the standard ERG remains unaffected. Both spiking and nonspiking electrical activity contribute to the PERG. Compared to the standard ERG, the PERG has a much smaller amplitude. However, using state-of-the-art equipment with robust averaging and processing to improve the signal-to-noise ratio, the PERG can now be easily recorded from surface adhesive electrodes in human and subdermal electrodes in mice . The PERG may be altered before histological loss of RGCs in glaucoma and optic neuropathies in both human and animal models . The PERG can also inform about the response dynamics over a range of visual stimuli of different strength as well as the ability to autoregulate under physiologically stressful conditions such as body inversion or flicker-induced increase in metabolic demand . Both response dynamics and autoregulation may provide useful biomarkers to establish altered RGC function not associated with cell death . Typically, neuroprotection studies in experimental models of optic neuropathies quantify the effect of treatment by comparing the RGC/axon number of an independent control group with that of the study group at a given endpoint. Noninvasive electrophysiology such as PERG provides longitudinal information on overall RGC function from baseline to endpoint, and additionally it provides unique information on the acute effect of treatment and the time course of the effect, which includes the potential neuroenhancement effect as well as the potential toxic effect and is useful for screening purposes. A strong proof of concept for the use of PERG as a biomarker of premanifest disease is offered by the DBA/2J mouse strain, which spontaneously develops a pigment-liberating iris disease, resulting in age-related IOP elevation and glaucoma . A compares the time course of IOP, PERG amplitude, and optic nerve axon number as a function of age of DBA/2J mice . The IOP increases moderately between 2 and 7 months, and more sharply thereafter, when the optic nerve starts losing axons. By the time axon loss is noticeable at about 8 months of age, the PERG signal has already lost over 50% of baseline amplitude at 2 months of age. This indicates that RGCs become dysfunctional before they die. Multiple regression analysis of data shown in A reveals that age (Log p = 28.5) plays a larger role than IOP (Log p = 3.8) in progressive loss of PERG signal. 
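The multiple-regression result quoted above (age outweighing IOP as a predictor of PERG amplitude loss in DBA/2J mice) corresponds to an ordinary linear model with two predictors. A minimal sketch in R is shown below; the data frame dba2j and its column names are hypothetical placeholders, not the authors' data or code.

# Minimal sketch (R): PERG amplitude modelled on age and IOP in DBA/2J mice
# dba2j: hypothetical data frame with columns perg_uv (amplitude, microvolts),
#        age_months, and iop_mmhg
fit <- lm(perg_uv ~ age_months + iop_mmhg, data = dba2j)
summary(fit)                                        # per-predictor t statistics and p values
-log10(summary(fit)$coefficients[, "Pr(>|t|)"])     # compare predictors on a -log10(p) scale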
The horizontal distance between the decay curves of function and structure provides an estimate of the lifespan of sick RGCs, which represents the time window of opportunity for treatment to prevent RGC death. The vertical distance between the decay curves of function and structure provides an estimate of RGC dysfunction that is not accounted for by cell death, which is potentially reversible . The comparison between the time courses of PERG amplitude and axon number ( A) offers an opportunity to investigate the relationship between RGC dysfunction and death . The working hypothesis assumes that at any given time point the residual PERG amplitude reflects the summed contribution of still normal RGCs, the reduced contribution of sick RGCs, and the null contribution of dead/lost RGCs, each in relative proportions. The residual axon count reflects the remaining number of RGCs. The hypothesis also assumes that at each successive timepoint a constant proportion of RGCs becomes sick (decay rate b ), functions at reduced capacity (dysfunction coefficient d ), and survives for a limited amount of time (time lag τ between sick and dead RGCs). These events will be reflected in progressive loss of PERG amplitude and axon number, with the former expected to decline earlier than the latter. These parameters can be included in a simple mathematical model that best fits the structural (axon number) and functional (PERG amplitude) time courses. In the example of A, the parameters that best fit the curves are decay rate b = 0.3/month, dysfunction coefficient d = 0.5 of normal, and sick-to-dead time τ = 6.5 months. Using these parameters, it is possible to estimate at each timepoint the proportion of healthy, sick, and dead RGCs ( B). Although the simple model shown in B has obvious limitations, it is useful to show that by the time RGCs start dying at about 7.5 months of age, most RGCs are sick and there are fewer healthy RGCs left. By 10 months of age there are no healthy RGCs left, while there are fewer sick RGCs to repair together with a growing population of dead RGCs. This has implications for choosing the appropriate time window for preventing RGC dysfunction (Rx_t1 in ), preventing RGC dysfunction and repairing ongoing RGC dysfunction (Rx_t2 in ), or limiting the rate of RGC death (Rx_t3, Rx_t4 in ). Neuroprotective strategies in different time windows do not necessarily use similar pharmacological approaches and may result in distinctive outcomes for residual RGC function and RGC number. Analogous models to that shown in may be hypothesized for a variety of conditions impacting the susceptibility and lifespan of RGCs together with their ability to generate electrical signals under a protracted degenerative process. Longitudinal clinical data in early glaucoma patients also show progressive loss of PERG signal, anticipating comparable loss of retinal nerve fiber layer thickness by several years. The rate of progressive PERG loss in glaucoma suspects may be reduced with IOP-lowering treatment . In human LHON, sudden and severe visual loss often begins in one eye first, usually followed by similar loss in the fellow eye a few months later . In unilateral LHON cases, the PERG signal is much altered not only in the symptomatic eye, but also in the asymptomatic eye . This suggests that in the asymptomatic eye there is manifest RGC dysfunction preceding RGC death that may be potentially prevented with a timely neuroprotective intervention, including gene therapy .
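The three-compartment reading described above (healthy RGCs become sick at rate b, contribute a fraction d of their normal signal, and die after a lag τ) can be simulated directly. The sketch below is one possible discrete-time implementation of that verbal model, using the parameter values quoted for the DBA/2J example (b = 0.3/month, d = 0.5, τ = 6.5 months); it is an illustration under those assumptions, not the authors' implementation.

# Minimal sketch (R): healthy -> sick -> dead RGC compartments over age
b   <- 0.3    # per-month rate at which healthy RGCs become sick
d   <- 0.5    # relative PERG contribution of a sick RGC
tau <- 6.5    # months a sick RGC survives before dying

dt     <- 0.1                                 # integration step (months)
t_grid <- seq(0, 12, by = dt)

healthy   <- exp(-b * t_grid)                 # fraction of RGCs still healthy
sick_rate <- b * healthy                      # inflow into the sick pool
# sick at time t = cells that turned sick within the preceding tau months
sick <- sapply(t_grid, function(t)
  sum(sick_rate[t_grid < t & t_grid >= (t - tau)]) * dt)
dead <- pmax(0, 1 - healthy - sick)

perg_amplitude <- healthy + d * sick          # predicted PERG, relative to baseline
axon_number    <- healthy + sick              # predicted surviving axons, relative to baseline

out <- data.frame(month = t_grid, healthy, sick, dead,
                  perg = perg_amplitude, axons = axon_number)
round(out[seq(1, nrow(out), by = 10), ], 2)   # values at whole months

Under these parameters the predicted PERG declines well before the predicted axon count, mirroring the horizontal and vertical offsets between the functional and structural decay curves discussed above.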
It is conceivable that PERG testing in LHON carriers may anticipate conversion from asymptomatic to symptomatic stage and thus inform timing of neuroprotective therapy. Neuroprotection refers to the relative preservation of neuronal structure and/or function independently of the primary cause of neuronal insult . Ideally, neuroprotection should extend the lifespan of functional RGCs, but this may not always be the case. In principle, neuroprotective strategies that target downstream molecular pathways of cell death such as caspases may keep RGCs on life support for a long time, but these RGCs are not expected to be fully functional. In contrast, strategies that enhance RGC function in the short term do not necessarily alter the rate of progression and may even accelerate RGC death . Noninvasive electrophysiology such as PERG provides the necessary functional outcome to assess the ability of RGCs to generate electrical signals under a protracted degenerative process with or without the presence of neuroprotective treatments. Notably, the PERG can provide a unique contribution to document altered dynamics of RGC function before the tipping point (critical period in ), which would also represent a rationale for early treatment. 5.1. RGC Excitability The PERG signal depends not only on the presence of functional RGCs, but also on the molecular environment that controls neuronal excitability, such as neurotrophic factors . For example, BDNF/TrkB interaction controls RGC intrinsic excitability by shifting polarization of the membrane potential . In healthy mice, a retrobulbar injection of lidocaine (axon transport blocker) does not induce RGC death but rapidly and reversibly reduces the PERG signal . These effects are believed to be induced by deficiency of retrograde signaling in the optic nerve, in particular shortage of neurotrophic factors derived from brain targets via retrograde axonal transport . Axon transport defects are known to play a critical role in the early stage in neurodegenerative disease including glaucoma and LHON . Early PERG impairment in glaucoma and optic neuropathies may be at least in part due to altered axonal transport that reduces RGC excitability. Changes in RGC excitability are reflected in the dynamics of the PERG response . In the normal mouse, the PERG amplitude increases with increasing contrast approximately in a linear manner (i.e., the PERG amplitude at 20% contrast is about 20% of the amplitude at 100% contrast). Although there are measurable differences in PERG contrast dependence in different mouse strains a strong departure from linearity occurs when availability of neurotrophic factors is altered . shows that in naive C57BL/6J mice, the PERG amplitude at 20% contrast is much lower than that at 100% contrast. In C57BL/6J mice receiving an intravitreal injection of BDNF or in C57BL/6J mice who had a chronic lesion of the superior colliculus—resulting in a compensatory upregulation of endogenous BDNF in the retina —the PERG amplitude at low contrast is higher than that of control C57BL/6J mice. Although the mechanisms underlying altered PERG contrast dependence (neurotrophic support/expression, synaptic transmission, plasticity) are only conjectural, changes of PERG dynamics can be used to detect and monitor early changes in RGC excitability. 5.2. 
RGC Adaptation Rapid dilation of retinal vessels in response to flickering light or fast-reversing patterns (functional hyperemia) is a well-known autoregulatory response driven by increased neural activity in the inner retina . Sustained metabolic stress may in turn influence RGC function, and this is reflected in a progressive decline of the PERG signal to a plateau (adaptation) over 2–4 min . PERG adaptation occurs in mice as well as in humans , and represents an index of normal neurovascular autoregulation triggered by a metabolic challenge. PERG adaptation may be reduced or absent when RGCs are dysfunctional as in glaucoma or in optic neuritis . For hypothesis-testing purposes, several models can explain the PERG adaptation dynamics. The model sketched in C is based on an energy budget model in a neurovascular-glial network that can be reduced to a simple electrical circuit and mathematical equation . Independently of the underlying mechanisms, PERG adaptation dynamics can be used to detect and monitor altered autoregulation of RGCs together with the neurovascular-glial network impinging on them. As shown in , in DBA/2J glaucoma the PERG amplitude progressively decreases with increasing age followed by loss of RGCs . In DBA/2J mice, retinal levels of nicotinamide adenine dinucleotide (NAD+, a key molecule in energy and redox metabolism) decrease with age and render aging neurons vulnerable to disease-related insults . Oral administration of the NAD+ precursor nicotinamide (vitamin B3) spares RGCs and their function at older ages . The magnitude of PERG adaptation also decreases with increasing age . However, prophylaxis with a diet rich in vitamin B3, in addition to saving functional RGCs, also spares the PERG autoregulatory dynamic range in response to flicker . 5.3. RGC Susceptibility to Stress Stress tests such as physical exercise are widely employed to investigate altered heart dynamics and are also used in eye diseases. Recovery of vision and VEP amplitude after exposure to a bright light (photostress) may be prolonged in macular diseases and in optic neuritis . Temporary IOP elevation can be induced with head-down (HD) body posture. In DBA/2J mice of different ages, head-down (HD) tilt of 60 degrees causes an IOP elevation of about 5 mm Hg . The PERG of young mice is unaffected by HD, but it becomes substantially depressed in older mice even before the onset of RGC death, suggesting susceptibility to HD stress . In human subjects, HD tilt of 10 degrees induces IOP elevation of about 3 mm Hg on average . While the PERG of normal subjects is not altered by HD tilt, it becomes substantially depressed in a subpopulation of glaucoma suspects . Longitudinal observation of HD-susceptible glaucoma suspects has shown that most of them developed RNFL thinning over 5 years .
Noninvasive, longitudinal assessment of RGC function appears to be a needed diagnostic tool in optic neuropathies. A substantial body of evidence supports the use of PERG to assess the ability of RGCs to generate electrical signals under a protracted degenerative process with or without the presence of neuroprotective treatments, which may have both diagnostic and prognostic values. Further, the PERG can provide a unique contribution to document altered dynamics of RGC function in response to stimuli of different intensity and under different physiological stressors, which may occur before the tipping point and provide the rationale for early treatment. Indeed, a goal of neuroprotective approaches should be preserving and restoring RGC integrity. The PERG can also be useful to screen acute neuroenhancement and toxic effects of neuroprotective drugs. User-friendly versions of the PERG technology are now commercially available for both clinical and experimental use.
A Preliminary Survey of Rheumatologists on the Management of Late-onset Rheumatoid Arthritis in Japan
2cfca589-b961-4384-b309-951064ff9063
11729168
Internal Medicine[mh]
Rheumatoid arthritis (RA) is a chronic autoimmune disease characterized by synovitis, which leads to bone destruction and functional disability . RA predominantly affects young to middle-aged women ; however, recent epidemiological data have shown an increase in the number of cases of late-onset RA (LORA) in an era of rapid population aging . LORA is characterized by rapid onset, intense inflammation, predominant involvement of large joints, and rheumatoid factor (RF) or anti-cyclic citrullinated peptide (anti-CCP) antibody seronegativity, which is distinct from young-onset RA (YORA) . Patients with LORA tend to have several medical and social concerns such as comorbidities, impaired physiological and cognitive function, frailty, and difficult life circumstances . It has also been reported that the frequency of glucocorticoid use increases with patient age in clinical practice , which causes side effects including osteoporosis, infection, cardiovascular events . Current evidence regarding the treatment strategies utilized for RA is mainly based on results of clinical trials that have included patients with YORA , and previous evidence or recommendations may not be applicable to LORA. Therefore, establishing an optimal treatment strategy focusing on aged patients with unique clinical phenotypes and multifactorial problems is currently an unmet need. In this study, we investigated the current opinions regarding the management of LORA among rheumatologists in clinical practice. Study protocol The survey was performed as part of the study sponsored by The Japan Agency for Medical Research and Development (AMED) project to establish best practice guidelines for patients with LORA based on the ethical spirit of the Declaration of Helsinki and in compliance with relevant laws and regulations in Japan, following the central collective review and approval from the Ethics and Conflict of Interest Committee of the National Center for Geriatrics and Gerontology (approval number 1543). In this study, which was performed in October 2021, we sent self-administered questionnaires by postal mail to 65 rheumatologists certified by The Japan College of Rheumatology. Among experienced rheumatology experts deemed eligible for the study, we selected participants to ensure that they were not biased by hospital type (tertiary hospitals or clinics), practice areas (city or rural settings), and specialty (physicians or surgeons). This study included 38 physicians (58%) and 27 surgeons (42%). We surveyed 35 (54%) rheumatologists from academic hospitals, 19 (29%) from general hospitals, and 11 (17%) from private clinics . The mean duration of rheumatologist experience was 31 years. The questionnaire consisted of multiple choice and descriptive formulae to investigate knowledge regarding LORA and its treatment in real-world clinical practice [ (English version) and (Japanese version)]. Statistical analysis All responses were evaluated by two independent investigators (Satoshi Takanashi and Yuko Kaneko). Proportions were calculated as the percentage of response rates for the multiple-choice questions. We scrutinized the descriptions in detail and cited or summarized the information for all the descriptive data. All statistical analyses were performed using the EZR software program (version 1.61) . 
Demographic characteristics All the 65 rheumatologists responded to the questionnaire. Among the 65 rheumatologists surveyed, 22 (34%) attended to 500-1,000 patients with RA, 17 (26%) attended to 1,000-1,500 patients, and 12 (18%) attended to 1,500-3,000 patients . The number of patients newly diagnosed with RA in one year varied from <20 to >160 . With regard to patient age, 47 rheumatologists (72%) responded that >50% of newly diagnosed patients were aged ≥65 years , and 27 rheumatologists (42%) responded that >30% of patients were aged ≥75 years . Question 1: What is the optimal cut-off age to define LORA? The cutoff ages used to define LORA were as follows: 16 (25%) rheumatologists considered 65 years and 23 (35%) considered 70 years or 75 years for each . Question 2: Please specify the main concerns associated with the management of LORA and the measures you adopt to address these issues. Rheumatologists' responses were as follows (number of responders, %): ・Renal function decline based on measurement of the estimated glomerular filtration rate (63, 97%), serum cystatin C (6, 9%), and serum creatinine (6, 9%) levels. ・Cognitive function decline, based on clinical impression or information from the family (55, 85%), and use of validated scales such as the Mini-Mental Status Examination (MMSE) or the Hasegawa Dementia Scale (7, 11%). ・Physical function decline, based on patients' use of auxiliary tools such as a cane (48, 74%) or wheelchair (45, 69%), patients' walking speed (31, 48%), and application of validated measures such as the Health Assessment Questionnaire Disability Index (7, 11%).
・Pulmonary issues based on evaluation of computed tomography findings (54, 83%), smoking habits (48, 74%), chest radiography findings (20, 31%), and pulmonary function test results (17, 26%). ・Life environment concerns were assessed through patient interviews by rheumatologists themselves (53, 82%) or by nurses (31, 48%). ・Miscellaneous factors, including a history of malignancy (13, 20%), infection (13, 20%), osteoporosis (12, 18%), liver dysfunction (11, 17%), diabetes mellitus (8, 12%), cardiac disease (8, 12%), and utilization of the social care system (3, 5%). Descriptive comments: “Optimal management of LORA warrants attention to a wide range of concerns because older patients have many issues.” Question 3: Please specify the treatment goal for optimal management of LORA and the therapeutic regimens you prefer in these patients Treatment goal The achievement of remission or low disease activity was the treatment goal highlighted by 42 (65%) rheumatologists, whereas the maintenance of activities of daily living and pain alleviation aimed at patient safety was the goal highlighted by 11 (17%) rheumatologists . Descriptive comments: “Therapeutic targets will be individualized on a case-by-case basis.” Treatment regimens Twenty-four (37%) rheumatologists used methotrexate with the addition of biological disease-modifying antirheumatic drugs (bDMARDs) and Janus kinase (JAK) inhibitors based on the European League Against Rheumatism recommendations 2019 , and 16 (25%) rheumatologists used combinations of multiple conventional synthetic disease-modifying antirheumatic drugs (csDMARDs) and/or low-dose glucocorticoids rather than methotrexate, compared with treatment regimens used for YORA. Descriptive comments: “Methotrexate, bDMARDs, and JAK inhibitors are used cautiously to avoid the risk of infection.” Question 4: Please describe the rationale to avoid methotrexate administration and for not increasing the methotrexate dose (if not contraindicated) and mention the alternatives to methotrexate prescribed in such cases. Most rheumatologists are concerned about the use of methotrexate or an increase in its dose in patients with impaired renal and/or cognitive function, lung diseases, methotrexate-associated adverse effects, such as oral ulcers, alopecia, liver dysfunction, or a history of lymphoproliferative disorders. Thirty-seven (57%) rheumatologists reported the addition of bDMARDs/JAK inhibitors or other csDMARDs as an alternative to increasing the methotrexate dose, 16 (25%) reported the addition of bDMARDs/JAK inhibitors, and 12 (18%) reported the addition of other csDMARDs . Descriptive comments: “Notably, csDMARDs other than methotrexate are used in patients with low disease activity. However, interleukin-6 inhibitors, which are effective without concomitant use of methotrexate, are considered in patients who show high disease activity.” Question 5: Please mention the interval between csDMARD initiation and addition of or switch to a bDMARD or JAK inhibitor Thirty-six (55%) rheumatologists indicated that this interval was within three months, 21 (32%) indicated an interval of within six months, and nine (14%) indicated an interval of within one year.
Question 6: Please mention the rationale for not prescribing bDMARDs or JAK inhibitors (if not contraindicated) and specify the alternatives prescribed in such cases Forty-four (68%) rheumatologists ascribed this decision to the patients' economic status, 30 (46%) avoided these prescriptions for safety concerns, and 12 (18%) avoided these prescriptions in patients with concomitant malignancies. Descriptive comments: “bDMARDs/JAK inhibitors are expensive and unaffordable for older patients who depend on pensions.” Question 7: Please describe the strategy used to select the appropriate bDMARDs or JAK inhibitors All rheumatologists in this survey preferred bDMARDs to JAK inhibitors based on differences between these drug classes with regard to drug metabolism, a risk of herpes zoster, and a possible risk of malignancy or major adverse cardiovascular events. Descriptive comments: “I tend to avoid JAK inhibitors, particularly for patients with cognitive function decline because these patients may be unable to manage medications independently.” “In patients with an inadequate response to bDMARDs, I consider switching to JAK inhibitors.” Question 8: Please describe your approach to prioritize disease control, safety, and patient's wish over your therapeutic strategy in patients with LORA We observed that 40 (63%) rheumatologists considered safety to be their highest priority, 17 (27%) considered control of disease activity, and seven (11%) prioritized patients' wishes over safety or disease activity . One rheumatologist did not respond to this question. Question 9: Please specify whether you evaluate frailty in clinical practice We observed that only five (8%) rheumatologists assessed frailty using validated criteria such as Fried's criteria (2, 40%), the 25-question Geriatric Locomotive Function Scale (2, 40%) , or the comprehensive geriatric assessment-7 tool (1, 20%) . Descriptive answers: “I tend to lower treatment goals and recommend rehabilitation in patients with frailty.” Question 10: Please mention when you consult or collaborate with specialists from other departments Forty-seven (72%) rheumatologists consulted and collaborated with physicians from other departments to outline the treatment strategy for patients with LORA, particularly for the management of patients with comorbidities, such as malignancy, non-tuberculous mycobacterial infection, and interstitial lung diseases. Additionally, 5 (8%) rheumatologists consulted other physicians for polypharmacy management. Descriptive comments: “Patients with LORA who have multiple comorbidities that warrant further evaluation or suggestions from specialists in other departments need referrals to general or academic hospitals and cannot be treated by rheumatologists who practice in private clinics.” Question 11: Please describe the factors associated with treatment withdrawal in patients with LORA Forty-nine (75%) rheumatologists indicated that adverse events such as infection, an exacerbation of comorbidities, and the development of malignancies necessitated the withdrawal of treatment. With regard to miscellaneous factors, 12 (18%) rheumatologists withdrew treatment due to the admission of patients to nursing homes, six (9%) indicated that treatment withdrawal was secondary to difficulty with hospital visits due to impaired activities of daily living, and two (3%) rheumatologists mentioned economic concerns as factors associated with treatment withdrawal. 
Question 12: Please describe factors associated with a decline in the rate of patients' hospital visits Thirty-five (54%) rheumatologists indicated physical disability in reduced hospital visits, 18 (28%) rheumatologists indicated exacerbation of comorbidities, and 14 (22%) rheumatologists indicated that admission to a nursing home contributed to reduced hospital visits. Question 13: Please discuss the relevant considerations in the management of LORA "Establishment of a treatment target and evaluation of the risk-benefit ratio are debatable in patients in whom implementation of the treat-to-target (T2T) strategy is challenging" "It is necessary to establish RA guidelines specific to older patients." "Multiple comorbidities and polypharmacy are serious concerns particularly relevant to older patients." "Familial or social support is important for safe management of drugs, particularly methotrexate or self-injection agents, and for timely monitoring of the patient's general condition. DMARD administration may need to be withdrawn in the absence of such support."
Descriptive comments: “Therapeutic targets will be individualized on a case-by-case basis.”

Treatment regimens

Twenty-four (37%) rheumatologists used methotrexate with the addition of biological disease-modifying antirheumatic drugs (bDMARDs) and Janus kinase (JAK) inhibitors based on the European League Against Rheumatism recommendations 2019 , and 16 (25%) rheumatologists used combinations of multiple conventional synthetic disease-modifying antirheumatic drugs (csDMARDs) and/or low-dose glucocorticoids rather than methotrexate, compared with treatment regimens used for YORA. Descriptive comments: “Methotrexate, bDMARDs, and JAK inhibitors are used cautiously to avoid the risk of infection.”

Most rheumatologists are concerned about the use of methotrexate or an increase in its dose in patients with impaired renal and/or cognitive function, lung diseases, methotrexate-associated adverse effects such as oral ulcers, alopecia, and liver dysfunction, or a history of lymphoproliferative disorders. Thirty-seven (57%) rheumatologists reported the addition of bDMARDs/JAK inhibitors or other csDMARDs as an alternative to increasing the methotrexate dose, 16 (25%) reported the addition of bDMARDs/JAK inhibitors, and 12 (18%) reported the addition of other csDMARDs . Descriptive comments: “Notably, csDMARDs other than methotrexate are used in patients with low disease activity. However, interleukin-6 inhibitors, which are effective without concomitant use of methotrexate, are considered in patients who show high disease activity.”

Thirty-six (55%) rheumatologists indicated that this interval was within three months, 21 (32%) indicated an interval of within six months, and nine (14%) indicated an interval of within one year.
In this preliminary survey, we investigated perspectives on the treatment strategy adopted by certified Japanese rheumatologists for patients with LORA. Our results suggest that, in addition to RA, patients with LORA have several concerns, such as comorbidities, financial constraints, and life circumstances that interfere with standard or recommended treatment implementation, which has led to wide disparities in the management strategies adopted even among experienced rheumatologists.
Further large-scale surveys that include rheumatologists currently treating patients with LORA are warranted. Our study also highlights the need to establish optimal clinical practice guidelines for the effective management of older patients with RA. There has been no national or international definition regarding the cutoff age for LORA; however, the cutoff age of 60 or 65 years for LORA has been used in previous studies , which reflects the cutoff age for general older people defined by the World Health Organization and Japan Geriatrics Society. However, our study revealed that approximately 70% of rheumatologists considered 70 or 75 years to be suitable as the cutoff age for LORA. This may reflect rheumatologists' perception that the clinical features of RA change for patients in their 70s or treatment strategies should be changed for patients in their 70s. Our previous study showed that the clinical characteristics of RA changed between 68 and 73 years of age . The number of LORA cases is increasing in the current era of rapid population aging . Age-induced changes in physical, organ system, and cognitive functions create challenges for rheumatologists with regard to the initiation of methotrexate and bDMARD/JAK inhibitor therapy or implementation of the T2T strategy in clinical practice. The T2T strategy using methotrexate and the addition of bDMARDs or JAK inhibitors is effective with an acceptable safety profile in patients with LORA ; however, 35% of patients failed to adhere to T2T owing to comorbidities or patients' decisions concerning safety issues. Notably, comorbidities such as lung diseases and chronic kidney disease are significantly associated with difficult-to-treat RA . This multifactorial complexity has resulted in considerable diversity in the management strategies adopted by experienced rheumatologists, as demonstrated in this study, which emphasizes the need to establish appropriate approaches to determine the risk-benefit ratio of the various therapeutic options. Moreover, the clinical phenotype of LORA is occasionally similar to that of polymyalgia rheumatica, which, however, is distinct from that of YORA , thus highlighting the differences between the pathogenesis of LORA and YORA. A previous study reported that the serum interleukin-6 levels were significantly higher and the tumor necrosis factor-α levels were significantly lower in patients with LORA than in those with YORA . Moreover, it is important to consider specific features, such as frailty and sarcopenia, which are typically associated with older populations . With regard to glucocorticoid use, methotrexate and concomitant moderate-dose glucocorticoid administration served as an effective and safe initial treatment strategy to induce remission in patients with early-stage RA ; however, the outcomes in patients with LORA remain unclear. The Glucocorticoid Low-dose in Rheumatoid Arthritis (GLORIA) trial, a recent pragmatic randomized trial, investigated the efficacy and safety of add-on low-dose prednisolone (5 mg/day) in patients aged ≥65 years with established RA . Add-on low-dose prednisolone therapy shows beneficial long-term effects in older patients with RA, with a trade-off of a 24% increase in non-severe adverse events over 24 months. However, this study did not investigate older patients with new-onset disease. Moreover, long-term glucocorticoid use (>2 years) is associated with cardiovascular diseases , osteoporosis , severe infectious adverse events , and sarcopenia. 
Furthermore, tapering and discontinuation are difficult in real-world practice. Given the multiple side effects of glucocorticoids, controlling disease activity with the T2T strategy of methotrexate and bDMARDs or JAK inhibitors in the early phase of the disease with minimum or no glucocorticoids is important for prolonging the healthy life expectancy of patients with LORA. Establishing a systematic assessment of safety-related risk factors specific to LORA will also help in decision making. Further studies targeting patients with LORA are therefore warranted to establish optimal treatment regimens. This study is associated with several limitations. First, the number of participants was small and limited to experienced rheumatology experts selected by the authors, which yielded in some selection bias; therefore, the overall opinions of general rheumatologists may be different from the results of our study. A nationwide survey involving a larger number of rheumatologists is warranted to verify the results of this study. Second, the results of this study may also contain biases derived from the super-aged society in Japan. Our results need to be validated in a larger multinational population. This study highlights a variety of rheumatologists' perspectives regarding the treatment strategies used for patients with LORA, thus suggesting that the accumulation of more evidence from real-world data of LORA is important to establish appropriate treatment approaches for this patient population. Satoshi Takanashi: Speaking fees, Asahi Kasei, Astellas, Bristol Myers Squibb, Eisai, Eli Lilly, Janssen, Mitsubishi Tanabe, Taisho and UCB. Yuko Kaneko: Research grants, AbbVie, Eisai, Sanofi, Chugai, Mitsubishi Tanabe and Taisho; Scholarship grants, Asahi Kasei, Eisai, Boehringer Ingelheim and Taisho; Speaking fees, AbbVie, Asahi Kasei, Astellas, Ayumi Pharmaceutical, Boehringer Ingelheim, Bristol Myers Squibb, Chugai, Eisai, Eli Lilly, Glaxo Smith Kline, Novartis, Pfizer, Janssen, UCB and Gilead Sciences. Yutaka Kawahito; Speaking fees, Asahi Kasei, Astellas, Eli Lilly, Daiichi Sankyo, Bristol Myers Squibb, Eisai, Chugai and Janssen; Scholarship grants, AbbVie, Eisai and Chugai; Clinical trial fees, AbbVie and Eisai. Takahiko Sugihara: Research grants, Asahi Kasei, Daiichi Sankyo, Chugai Pharmaceutical and Ono Pharmaceutical; Honoraria, Abbvie Japan, Asahi Kasei, Astellas Pharma, Ayumi Pharmaceutical, Bristol Myers Squibb, Chugai Pharmaceutical, Eli Lilly Japan, Mitsubishi Tanabe Pharma, Ono Pharmaceutical, Pfizer Japan, Taisho Pharmaceutical, Takeda Pharmaceutical and UCB Japan. Toshihisa Kojima: Research funding, AbbVie, Eli Lilly, Chugai and Astellas; Speaking fees, Chugai, Pfizer, Bristol Myers Squibb, Daiichi Sankyo, Eli Lilly and Astellas. Akio Morinobu: Research funding, AbbVie, Asahi Kasei, Chugai and Ono Pharmaceutical; Speaking fees, Eli Lilly, AbbVie, Ono Pharmaceutical; Pfizer, Astellas, Chugai, Eisai and Bristol Myers Squibb. Toshihiro Matsui: Speaking fees, Eli Lilly, Chugai, and Ono Pharmaceutical; Research funding, Astellas, Glaxo Smith Kline and AbbVie; Scholarship grant, Chugai and Asahi Kasei. Hajime Ishikawa: Speaker and Writing fees, Chugai and Bristol Myers Squibb; Research funding, IQVIA, Corronal, Eli Lilly, Eisai and Gilead Sciences; Scholarship grants, IQVIA, Corronal, Eli Lilly, Eisai and Gilead Sciences. Keiichiro Nishida: Speaking fees, Asahi Kasei, Pfizer, Chugai and Daiichi Sankyo; Scholarship grant, Chugai. Isao Matsushita: Lecture fees, AbbVie and Astellas. 
Eiichi Tanaka: Speaking fees, AbbVie, Asahi Kasei, Astellas, Ayumi Pharmaceutical, Bristol Myers Squibb, Celltrion, Chugai, Daiichi Sankyo, Eisai, Eli Lilly, GlaxoSmithKline, Kyowa Pharma Chemical, Janssen, Mitsubishi Tanabe, Mochida Pharmaceutical Plant, Nippon Kayaku, Pfizer, Takeda, Teijin Nakashima Medical and UCB; Consulting fees, AbbVie, Asahi Kasei, Astellas, Ayumi Pharmaceutical, Bristol Myers Squibb, Celltrion, Chugai, Daiichi Sankyo, Eisai, Eli Lilly, GlaxoSmithKline, Kyowa Pharma Chemical, Janssen, Mitsubishi Tanabe, Mochida Pharmaceutical Plant, Nippon Kayaku, Pfizer, Takeda, Teijin Nakashima Medical and UCB. Shintaro Hirata: Research grants, AbbVie, Asahi Kasei, Eisai, Otsuka Pharmaceutical, Sanofi, Shionogi, Chugai, Pfizer, Mitsubishi Tanabe, Eli Lilly and UCB; Consulting fees, AbbVie, Astellas, Eisai, Gilead Sciences, Eli Lilly and Bristol Myers Squibb; Speaking fees, AbbVie, Asahi Kasei, Astellas, Ayumi Pharmaceutical, Bristol Myers Squibb, Celgene, Chugai, Eisai, Gilead Sciences, GlaxoSmithKline, Eli Lilly, Janssen, Kyorin Pharmaceutical, Novartis, Pfizer, Sanofi and Mitsubishi Tanabe. Mitsumasa Kishimoto: Speaking fees, Chugai, Pfizer, AbbVie, Mitsubishi Tanabe, Eisai, Eli Lilly, Daiichi Sankyo, Astellas, Ayumi Pharmaceutical, Ono Pharmaceutical, Asahi Kasei, Janssen, Amgen, Gilead Sciences, UCB, Bristol Myers Squibb and Novartis. Hiromu Ito: Research funding, Bristol Myers Squibb, Eisai, Taisho and Mochida Pharmaceutical Plant. Toshihiko Hidaka: Speaking fees, AbbVie, Asahi Kasei Pharma, Bristol Myers Squibb, Chugai Pharmaceutical, Eisai, Eli Lilly Japan, Pfizer Japan and Janssen. Motomu Hashimoto: Research grants, AbbVie, Asahi Kasei, Astellas, Bristol Myers Squibb, Eisai Daiichi Sankyo, Eli Lilly and Novartis; Speaking fees, Eli Lilly, Chugai, Mitsubishi Tanabe, Bristol Myers Squibb and Eisai. Masayoshi Harigai: Research grants, AbbVie, Asahi Kasei, Astellas, Ayumi Pharmaceutical, Bristol Myers Squibb, Chugai, Eisai, Mitsubishi Tanabe, Nippon Kayaku and Taisho; Consulting fees, AbbVie and Bristol Myers Squibb; Speaker fees, AbbVie, Mitsubishi Tanabe, Chugai and Eli Lilly. Takashi Kida: Speaking fees, Asahi Kasei, Chugai, Eisai and Mitsubishi Tanabe. This work was supported by AMED under Grant Number JP21ek0410086h0001. JP22ek0410086h0002 and the Research Funding for Longevity Sciences from National Center for Geriatrics and Gerontology (21-19). Satoshi Takanashi received a research grant from JSPS KAKENHI Grant (22K16350) and The JCR Grant for Promoting Research for D2T RA, Keio University of Medicine, Keio Medical Association. Supplementary Figure 1. Treatment regimen when a rheumatologist does not prescribe methotrexate. Thirty-seven (57%) rheumatologists answered the addition of bDMARDs/JAK inhibitors or other csDMARDs as an alternative to increasing the methotrexate dose, 16 (25%) rheumatologists answered addition of bDMARDs/JAK inhibitors, and 12 (18%) answered addition of other csDMARDs. Supplementary Table 1. Questionnaire in English Supplementary Table 2. Questionnaire in Japanese
Capsid Integrity Detection of Enteric Viruses in Reclaimed Waters
6985f791-d86f-45d7-b284-85a62f184285
11209584
Microbiology[mh]
In recent years, wastewater-based epidemiology has become a useful tool for tracking pathogens with notable epidemiological implications. Recent studies have successfully applied this approach to detect a range of viruses, including the re-emergence of poliovirus in New York , the tracking of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) , monkeypox virus , and human enteric viruses . Furthermore, the analysis of viruses in water serves to evaluate the safety of aquatic environments and determine the suitability of reclaimed water for various purposes, such as recreational, agricultural, and industrial applications . Human enteric viruses, primarily transmitted through the fecal–oral route (direct contact with infected individuals, or ingestion of contaminated food and water), are responsible for viral gastroenteritis, hepatitis, and other diseases . Furthermore, both symptomatic and asymptomatic individuals can shed up to 10 13 viral particles per gram of stool . Wastewater treatment systems are designed to reduce the concentration of organic matter, suspended solids, and pathogenic microorganisms . However, enteric viruses tend to be more persistent in the environment and resistant to the removal and disinfection processes typically applied by wastewater treatment plants (WWTPs) . As a result, achieving water quality with the complete removal of viral particles from reclaimed water and preventing their presence in environmental samples has proven challenging . Considering the current water scarcity and adverse climatic conditions, it is imperative to reuse available water resources, particularly reclaimed water for agricultural purposes, as agriculture consumes a high proportion of water . However, inappropriate use of reclaimed water has led to outbreaks of viral infectious diseases worldwide , and the reuse of wastewater in agriculture can pose health risks associated with the consumption of fresh vegetables and berries . Escherichia coli and other fecal indicator bacteria are commonly used for assessing the microbial quality of WWTP effluent; however, many studies have demonstrated that these methods may not accurately represent the spectrum of pathogens present in feces, particularly human enteric viruses . Therefore, the European Regulation (EU) 2020/741 has established a minimum requirement of ≥6 Log 10 reduction in the concentration of F-specific coliphages, somatic coliphages, or total coliphages for the use reclaimed water for agricultural irrigation. In addition to fecal indicator bacteria and coliphages, the use of crAssphage has been proposed in recent years to estimate viral contamination in environmental waters and to assess the efficiency of viral removal during wastewater treatment . Currently, real-time polymerase chain reaction (qPCR) is the method of choice to monitor human viral pathogens in wastewater and environmental samples . However, qPCR methods cannot discriminate between infectious viruses, inactivated viruses, or free viral genomes. To address this limitation, samples can be pretreated with intercalating dyes such as propidium monoazide (PMA), ethidium monoazide, or platinum compounds. These dyes selectively allow the detection of viruses with intact capsids, providing a more accurate assessment of viral infectivity . Thus, in this study influent wastewater and reclaimed water samples were analyzed for the presence of human pathogenic viruses over ten months using rapid molecular methods. 
Additionally, an optimized PMAxx-RT-qPCR method was developed to infer viral infectivity in both sample types, particularly in reclaimed water intended for irrigation. This study also aimed to investigate the correlation of crAssphage and somatic coliphages with the presence of human enteric viruses.

2.1. Sampling Site and Sample Collection

Influent wastewater (n = 30) and reclaimed water (n = 30) samples were collected from May 2022 to March 2023 from a WWTP in the Comunitat Valenciana (Spain) serving 170,000 inhabitants. In the sampled wastewater treatment plant, reclamation processes include tertiary UV treatment combined with chlorination. Grab samples (200 mL) were collected in sterile HDPE plastic containers (Labbox Labware, Barcelona, Spain), placed on ice, and transported to the laboratory. Upon arrival, they were kept refrigerated at 4 °C and concentrated within 24 h.

2.2. Somatic Coliphages Determination

To quantify the levels of somatic coliphages, an aliquot of the water samples was filtered through sterile filters with a pore size of 0.45 μm. The commercial Bluephage Easy Kit for Enumeration of Somatic Coliphages (Bluephage S.L., Barcelona, Spain) was used according to the manufacturer's instructions.

2.3. Virus Concentration

Influent wastewater and reclaimed water samples were artificially inoculated with approximately 7 Log 10 genome copies (gc)/L of mengovirus (MgV) strain vMC0 (CECT 100000) as a process control. Samples were concentrated using an aluminum hydroxide adsorption–precipitation method . In brief, 200 mL of each sample was adjusted to pH 6.0, and an Al(OH) 3 precipitate was formed by adding 1 part of 0.9 N AlCl 3 solution to 100 parts of the sample. After adjusting the pH back to 6.0, the sample was mixed using an orbital shaker for 15 min at room temperature (RT). The viruses were then collected by centrifugation, and the pellet was resuspended in 10 mL of 3% beef extract pH 7.4. After shaking for 10 min, the water concentrate was recovered by centrifugation, resuspended in 1 mL of PBS, and stored at −80 °C.

2.4. Nucleic Acid Extraction, Detection and Quantification

Nucleic acid extraction from influent wastewater and reclaimed water concentrates was performed using the Maxwell ® RSC Instrument (Promega, Madison, WI, USA) with the Maxwell RSC Pure Food GMO and authentication kit (Promega) and the “Maxwell RSC Viral Total Nucleic Acid” running program . For viral detection and quantification, different kits and instruments were used depending on the targeted virus. The One Step PrimeScript™ RT-PCR Kit (Perfect Real Time, Takara Bio Inc., San Jose, CA, USA) was used for the detection and quantification of the MgV. The RNA UltraSense One-Step kit (Invitrogen, Waltham, MA, USA) was used for the detection of human norovirus (HuNoV) genogroup I (GI), HuNoV GII, and rotavirus (RV) as previously described . The QuantStudio™ 5 Real-Time PCR (Applied Biosystems, Waltham, MA, USA) and the LightCycler ® 480 instrument (Roche Diagnostics, Basel, Switzerland) were used for the PCR amplification. The qPCR Premix Ex Taq™ kit (Takara Bio Inc.) was used for the detection of crAssphage . Primers, probes, and (RT)-qPCR conditions used in the study are listed in . Moreover, undiluted and 10-fold diluted nucleic acid extracts were tested in duplicate to check for inhibitors.
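As an editorial illustration of the arithmetic behind this quantification step, the Python sketch below shows how a cycle threshold (Ct) is converted to genome copies per litre of the original sample and how the inhibitor check on a 10-fold dilution can be interpreted. Only the 200 mL sample volume and the 1 mL PBS concentrate are taken from the text; the standard-curve slope and intercept, the extraction and reaction volumes, and the example Ct values are assumed purely for illustration and are not values reported in this study.

```python
import math

# Illustrative standard-curve parameters from a gBlock dilution series (assumed):
# Ct = SLOPE * log10(copies per reaction) + INTERCEPT
SLOPE = -3.35
INTERCEPT = 38.0

# Workflow volumes; only the 200 mL -> 1 mL concentration step is stated in the
# text, the remaining volumes are typical, assumed values.
SAMPLE_VOL_ML = 200.0      # grab sample that was concentrated
CONCENTRATE_VOL_ML = 1.0   # pellet resuspended in PBS
EXTRACT_INPUT_ML = 0.2     # concentrate loaded onto the extraction
ELUATE_VOL_ML = 0.1        # nucleic acid eluate volume
TEMPLATE_VOL_ML = 0.005    # template added per (RT)-qPCR reaction

def ct_to_copies_per_reaction(ct: float) -> float:
    """Interpolate genome copies per reaction from the standard curve."""
    return 10 ** ((ct - INTERCEPT) / SLOPE)

def copies_per_litre(ct: float) -> float:
    """Back-calculate genome copies per litre of the original water sample."""
    per_ml_eluate = ct_to_copies_per_reaction(ct) / TEMPLATE_VOL_ML
    per_ml_concentrate = per_ml_eluate * ELUATE_VOL_ML / EXTRACT_INPUT_ML
    per_ml_sample = per_ml_concentrate * CONCENTRATE_VOL_ML / SAMPLE_VOL_ML
    return per_ml_sample * 1000.0

def mengovirus_recovery(ct_sample: float, spiked_log10_gc_per_l: float = 7.0) -> float:
    """Process-control recovery (%) relative to the spiked MgV level."""
    return 100.0 * copies_per_litre(ct_sample) / 10 ** spiked_log10_gc_per_l

def inhibition_flag(ct_undiluted: float, ct_tenfold: float, tolerance: float = 1.0) -> bool:
    """A ten-fold dilution should shift the Ct by roughly one log (about 3.35
    cycles with this slope); a much smaller shift suggests RT-qPCR inhibition."""
    expected_shift = abs(SLOPE)
    return (ct_tenfold - ct_undiluted) < (expected_shift - tolerance)

if __name__ == "__main__":
    ct = 30.1  # illustrative Ct for an influent extract
    print(f"Estimated titre: {math.log10(copies_per_litre(ct)):.2f} Log10 gc/L")
    print(f"MgV recovery: {mengovirus_recovery(31.5):.1f} %")
    print(f"Inhibited? {inhibition_flag(30.1, 32.0)}")
```

With these assumed volumes the dilution factors cancel out into a single sample-specific constant, which is why recovery-corrected back-calculation (discussed in the Results) only rescales, rather than reshapes, the reported titres.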
Different controls were used in all assays: a negative extraction control consisting of PBS; a whole process control to monitor the process efficiency of each sample (MgV); and positive (reference material) and negative (RNase-free water) (RT)-qPCR controls. Standard DNA material for crAssphage, HuNoV GI, HuNoV GII, and RV for standard curve generation relied on customized gBlock gene fragments (Integrated DNA Technologies, Coralville, IA, USA).

2.5. Viral Capsid Integrity Assay in Sewage Samples and Optimization in Influent Wastewater

To assess the integrity of viral capsids in sewage samples, a capsid integrity protocol based on PMAxx was evaluated . Briefly, samples were placed in DNA LoBind 1.5 mL tubes (Eppendorf, Hamburg, Germany), and the photoactivable dye PMAxx TM (Biotium, Fremont, CA, USA) was added to 300 µL of each concentrated influent wastewater sample at 100 µM final concentration along with 0.5% Triton X-100 (Thermo Fisher Scientific, Valencia, Spain), and the mixture was then incubated in the dark at RT for 10 min at 150 rpm. Later, samples were photoactivated for 15 min using a Led-Active Blue system (GenIUL, Barcelona, Spain), and nucleic acid extraction was carried out as described above. Due to the initially observed underperformance of this procedure, the capsid integrity assay was further optimized by diluting the concentrates in PBS (5-fold and 2-fold) and incorporating an additional sample incubation and photoactivation cycle. PMAxx-RT-qPCR optimization assays were conducted targeting HuNoV GI, HuNoV GII, and RV in influent wastewater samples exposed or not to thermal inactivation at 99 °C for 5 min.

2.6. Statistical Analysis

Statistical analyses were performed using GraphPad Prism version 5.0 (GraphPad, La Jolla, CA, USA). Data were checked for normality of distribution using the Shapiro–Wilk normality test. Non-parametric tests, such as the Kruskal–Wallis test with Dunn's multiple comparisons post-test and the Spearman ρ coefficient non-parametric correlation test, were used to compare viral loads between influent wastewater and reclaimed water, assess the distribution of enteric viruses, and determine the correlation between viral titers. A t-test was used to analyze differences in viral removal after capsid integrity treatment. The significance level was set at a p-value cut-off of 0.05.
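For readers who want to reproduce this kind of analysis outside GraphPad, the same tests are available in SciPy. The sketch below mirrors the workflow described above (normality check, non-parametric group comparison, rank correlation, and a paired comparison of removals) on made-up example titres; Dunn's multiple-comparisons post-test is not part of SciPy and would require an additional package, so it is omitted here. The data and the framing of the paired t-test are illustrative assumptions, not the study's dataset.

```python
import numpy as np
from scipy import stats

# Illustrative paired Log10 gc/L titres (not the study's data)
influent = np.array([7.9, 8.1, 7.5, 8.4, 7.7, 8.0, 7.8, 8.2])
reclaimed = np.array([6.8, 7.0, 6.4, 7.3, 6.6, 6.9, 6.7, 7.1])
crassphage = np.array([8.9, 9.1, 8.4, 9.4, 8.6, 9.0, 8.8, 9.2])

# 1) Normality check (Shapiro-Wilk) to justify non-parametric tests
for name, values in [("influent", influent), ("reclaimed", reclaimed)]:
    w, p = stats.shapiro(values)
    print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p:.3f}")

# 2) Non-parametric comparison of viral loads between sample types
h, p_kw = stats.kruskal(influent, reclaimed)
print(f"Kruskal-Wallis: H={h:.2f}, p={p_kw:.4f}")

# 3) Spearman rank correlation between an indicator and a pathogen
rho, p_rho = stats.spearmanr(crassphage, influent)
print(f"Spearman rho={rho:.2f}, p={p_rho:.3f}")

# 4) Paired t-test on per-sample removals, e.g. comparing removals estimated
#    with and without the capsid integrity (PMAxx) treatment
removal_rtqpcr = influent - reclaimed
removal_pmaxx = removal_rtqpcr - np.random.default_rng(1).normal(0.3, 0.1, influent.size)
t, p_t = stats.ttest_rel(removal_rtqpcr, removal_pmaxx)
print(f"Paired t-test: t={t:.2f}, p={p_t:.4f}")
```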
3.1. Prevalence of Enteric Viruses, crAssphage, and Somatic Coliphages in Influent Wastewater and Reclaimed Water Samples

The relevance of water as a vector of viral diseases has been known for decades; however, due to climate change and water scarcity, reclamation of wastewater is of the utmost importance. Thus, in this study, influent wastewater and reclaimed water were analyzed over 10 months to determine the presence of HuNoV GI, HuNoV GII, and RV, together with the recently proposed viral fecal contamination indicator crAssphage and total somatic coliphages . The recovery of the process control, MgV, ranged from 8.08% to 63.64% for influent wastewater samples and from 11.72% to 99.20% for reclaimed water samples . Thus, the obtained results were validated based on the criteria outlined in ISO 15216-1:2017 , where a recovery control of ≥1% is required. Considering the characteristics of the samples and the study's objectives, viral titers were not adjusted based on the recovery of the process control, as back-calculation is not recommended . The average viral concentrations in influent wastewater (n = 30) were 4.11 ± 0.62 (26/30), 7.87 ± 0.97 (30/30), and 8.11 ± 1.31 (27/30) Log 10 gc/L for HuNoV GI, HuNoV GII and RV, respectively . Haramoto and collaborators summarized the average concentrations of HuNoV GI, HuNoV GII, and RV in different environmental water samples, and our results are consistent with those findings, except for RV, for which we recorded higher levels. Additionally, these findings align with those reported by Stobnicka and collaborators , where HuNoV GII was the most prevalent enteric virus, followed by HuNoV GI and RV. Similar results have also been reported by other authors , showing a higher concentration of RV followed by HuNoV GII and HuNoV GI in influent wastewater. However, Randazzo et al. described lower levels for RV (5.41–6.52 Log PCR units (PCRU)/L). There are few studies that have analyzed the distribution of enteric viruses in environmental samples over long periods of time , and particularly in sewage . The viral concentrations obtained over ten months and distributed across the study by season are represented in . In influent wastewater, statistically higher levels of HuNoV GII were observed during the fall season ( p < 0.05). These trends align with previous findings that also reported higher levels of HuNoV GI and GII in the cold months (October–March), with HuNoV GII being more prevalent than HuNoV GI . However, considering the duration of this study, the term seasonality may not be fully applicable. To accurately assess the impact of climate on the distribution of enteric viruses in environmental samples, more extensive and longer-term studies, spanning at least three years, are deemed necessary. Regarding viral fecal indicators, crAssphage showed the highest concentrations, which ranged from 5.71 to 9.67 Log 10 gc/L (30/30) in influent wastewater. Wu et al.
reported values ranging from 7.20 to 8.96 Log 10 gc/L in influent wastewater, which aligns with concentrations reported in other studies from Italy, the US, and Japan . The concentration of crAssphage in influent wastewater can reach levels up to 10 Log 10 gc/L , although it may vary depending on factors such as urbanization level, population served by the WWTP, available infrastructures, climate conditions, and the impact of diet on the gut microbiome . In parallel, somatic coliphages were monitored by plate count, and the results showed mean concentrations of 5.36 ± 0.79 Log 10 plaque-forming units (pfu)/L (30/30) in influent wastewater. However, in a recent review , somatic coliphages were found at higher levels, with an average of 7.26 ± 0.50 Log 10 pfu/L. Additionally, in a study conducted on influent wastewater across the United States, the average somatic coliphage concentration was 5.61 ± 0.91 Log 10 pfu/L. In general, influent wastewater is known to present a high prevalence of human enteric viruses . Considering the current climate change situation and the challenge of water scarcity, it is important to treat and regenerate these waters for various purposes . At the international level, there are different regulations proposing acceptable removal targets for the correct reuse of wastewater . Bacterial indicator counts are generally used, but monitoring of viral indicators is typically not required, though virus removal rates are often prescribed by treatment requirements for system design . The most recent European legislation 2020/741 sets minimum requirements for wastewater reuse, specifically requiring a ≥6 Log 10 reduction in rotavirus and coliphages. This legislation also emphasizes the need to validate monitoring programs as a barrier to virus transmission in reclaimed water used for agricultural irrigation . In reclaimed water samples (n = 30), the most prevalent virus, RV, was detected with average concentrations of 7.05 ± 0.61 Log 10 gc/L (30/30). Additionally, HuNoV GI and HuNoV GII were found in reclaimed waters at levels of 3.23 ± 0.46 (20/30) and 6.83 ± 0.60 (17/30) Log 10 gc/L, respectively . Overall, the HuNoV GI and HuNoV GII concentrations in reclaimed water reported in this study were higher than those previously reported , whereas Randazzo and collaborators reported RV levels (<5.51 Log PCRU/L) lower than those observed in our study. CrAssphage is consistently present and has been reported in waters that receive human fecal pollution . All reclaimed water samples tested positive for crAssphage by qPCR, with levels ranging from 4.53 to 8.26 Log 10 gc/L (30/30). These levels are similar to those previously described . The presence of somatic coliphages in reclaimed water was analyzed to verify compliance with legislative reduction requirements and to assess their correlation with the presence of human enteric viruses, as the detection of somatic coliphages in reclaimed water may serve as an indicator of the presence of enteric viruses or of the efficacy of their elimination. After the wastewater treatment, the mean removal of somatic coliphages was 3.18 ± 1.74 Log 10 pfu/L . Values provided in a recent review showed a reduction in somatic coliphage levels in European WWTPs of 2.32 ± 0.42 Log 10 pfu/L, which is significantly lower than the reduction obtained in our study. The study conducted by Worley-Morse et al. , carried out in the United States, showed an initial mean reduction in somatic coliphages in primary treatment of 0.4 Log 10 pfu/L.
In secondary treatment, reductions ranged from 0.06 to 3 Log 10 pfu/L, relative to initial somatic coliphage levels of 6.2 ± 0.49 Log pfu/L. While the reduction in coliphages reported in our study did not meet legislative specifications, it is noteworthy that coliphages were the only analyzed viruses to achieve complete reduction in 40% of the reclaimed water samples . None of the studied enteric viruses or crAssphage achieved the required reduction after the wastewater treatment , indicating a low efficacy in virus removal by the analyzed WWTP. The mean Log 10 removals were 0.96 ± 0.72, 2.29 ± 0.95, 1.03 ± 0.60, and 3.18 ± 1.34 gc/L for HuNoV GI, HuNoV GII, RV, and crAssphage, respectively . It is important to note that, while infectivity cannot be directly inferred from (RT)-qPCR detection, the observed combination of factors warrants caution in the reuse of these waters. Considering the levels of somatic coliphages and the high concentrations of enteric viruses recorded in the reclaimed water samples of our study, it is advisable to reject these reclaimed waters and consider them unsuitable for agricultural irrigation.

3.2. Correlation among Enteric Viruses and Viral Indicators in Reclaimed Water

Fecal indicator bacteria have been proven to not accurately reflect viral risk to human health as they do for pathogenic bacteria . CrAssphage, which has lately been raised as a novel fecal marker, has been suggested as a new viral indicator in wastewater sample analyses . The presence of crAssphage indicates fecal contamination from human or animal sources. Increased levels of crAssphage within reclaimed water heighten the probability that pathogenic viruses are also present. Recent studies have also shown crAssphage to be a robust indicator of fecal contamination in the environment and in different water matrices . However, the correlation between crAssphage and the presence of human viral pathogens is not clear and further research is needed. In our study, a strong positive correlation (n = 30) of crAssphage with HuNoV GII (ρ = 0.86, p = 0.01) and a moderate correlation with RV (ρ = 0.62, p = 0.06) were observed in reclaimed water analyzed by (RT)-qPCR. The same correlation test was performed with reclaimed water samples positive for somatic coliphages and did not show any correlation with enteric viruses .

3.3. Assessing Viral Infectivity in Influent Wastewater and Reclaimed Water by PMAxx-RT-qPCR

To avoid overestimating the risk posed by inactivated viruses when using molecular techniques, a capsid integrity assay was conducted. PCR-based monitoring of enteric viruses in reclaimed water can be a sensitive and specific tool for assessing compliance with European legislation. However, molecular-based methods can detect both infectious and non-infectious viruses, which may overestimate the risk associated with reclaimed water . Traditional cell-culture methods for assessing viral infectivity in water samples have faced challenges , leading to the development of new methods based on capsid integrity using viability markers. These methods have shown promising results for evaluating the infectivity of enteric viruses, mainly HuNoV and hepatitis A virus, and respiratory viruses in wastewater and other matrices . Capsid integrity, as assessed by these methods, is a valid and robust indicator of virus infectivity and can enhance risk assessment in monitoring programs .
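In practice, the capsid-integrity readout reduces to comparing the signal measured with and without the dye: genomes protected by intact capsids remain amplifiable after PMAxx treatment, while free or accessible genomes lose signal. The short sketch below illustrates one way of expressing that comparison; the standard-curve slope, the acceptance threshold for the heat-inactivated control, and the Ct values are assumed for illustration and are not criteria stated in this study.

```python
# Interpreting a PMAxx capsid-integrity RT-qPCR run (illustrative values only).
SLOPE = -3.35  # assumed standard-curve slope: Ct change per log10 of template

def log10_signal_reduction(ct_untreated: float, ct_pmaxx: float) -> float:
    """Convert the Ct shift caused by PMAxx treatment into a Log10 signal reduction."""
    return (ct_pmaxx - ct_untreated) / abs(SLOPE)

def treatment_control_ok(ct_untreated_heat: float, ct_pmaxx_heat: float,
                         min_log10_drop: float = 3.0) -> bool:
    """The heat-inactivated control (99 degC, 5 min) should lose essentially all
    signal after PMAxx; require at least `min_log10_drop` Log10 (an assumed
    acceptance criterion, not one reported in the text)."""
    return log10_signal_reduction(ct_untreated_heat, ct_pmaxx_heat) >= min_log10_drop

# Native sample: small shift -> most genomes sit inside intact capsids
print(log10_signal_reduction(29.8, 30.6))   # ~0.24 Log10

# Heat-inactivated control: large shift expected if photoactivation worked
print(treatment_control_ok(29.8, 41.5))     # True (~3.5 Log10 drop)
```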
This study provides additional insights into the optimal conditions for quantifying intact capsid enteric viruses in influent wastewater and reclaimed waters, particularly for RV, for which such novel optimized methods have not been validated previously. To validate the PMAxx-RT-qPCR protocol, different dilutions of influent wastewater were conducted and were tested for HuNoV GI, HuNoV GII, and RV presence to achieve the best performance. However, the signal was not efficiently reduced after inactivation at 99 °C together with PMAxx treatment. Thus, simple photoactivation was not sufficient to evaluate the potential infectivity of HuNoV GI, HuNoV GII, and RV in these types of samples. It is known that various factors (concentration and dye intercalating conditions, matrix, among others) can prevent a proper photoactivation of PMAxx affecting signal reduction in inactivated and treated samples . Therefore, diluted influent wastewater and reclaimed water samples in PBS (5-fold and 2-fold, respectively) were subjected to double photoactivation, after the thermal inactivation step, and the signal of the samples treated with PMAxx was completely reduced. In all cases, a negative process control was used ( and ). The presence of potentially infectious viruses was tested in a subset (n = 18) of influent wastewater and reclaimed water samples using the optimized PMAxx-RT-qPCR method for RV and HuNoV . The evaluation of influent wastewater and reclaimed water samples over the course of the study using the PMAxx-RT-qPCR method revealed the presence of potentially infectious HuNoV GI, HuNoV GII, and RV . After performing the capsid integrity (RT)-qPCR with optimized conditions, the cycle threshold (Ct) is shown in . Our results indicate that 89% of influent wastewater treated with the optimized PMAxx protocol (n = 9) tested positive for HuNoV GI, and 100% tested positive for HuNoV GII, with an average concentration of 4.59 ± 0.32 Log 10 gc/L (8/9) and 7.46 ± 0.50 Log 10 gc/L (9/9). RV was present in 67% of influent wastewater samples analyzed both with and without the optimized PMAxx protocol, with higher mean levels compared to the other two viruses, at 8.12 ± 0.25 Log 10 gc/L (6/9). In positive reclaimed water samples treated with the optimized PMAxx protocol (n = 9), HuNoV GI was detected in 67% of samples with average concentrations of 3.82 ± 0.52 Log 10 gc/L (6/9), while HuNoV GII was only detected in one replicate of all the analyzed samples, with a concentration of 5.94 Log 10 gc/L (1/9). Additionally, RV was detected in 78% of the samples with concentrations of 6.69 ± 0.48 Log 10 gc/L (7/9). Results obtained after the capsid integrity assay suggest the potential spread of infectious viruses through the environment by positive reclaimed waters. A high prevalence of HuNoV GI, GII and RV has been consistently reported in influent wastewater despite yearly fluctuations . After reclamation treatments, enteric viruses demonstrate a significant reduction with an expected average decrease of 1 to 1.5 Log 10 due to conventional secondary activated sludge treatment . However, removal rates vary considerably based on the treatment facility . In our study, the detection limit of each virus was used to perform the analyses in cases of total reduction among paired samples. 
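Because some effluent samples were negative, the per-pair removal cannot always be computed from two measured titres; as stated above, the detection limit is substituted when the effluent result is a non-detect. The sketch below shows that calculation on made-up paired Log10 titres; the detection limits and example values are assumptions for illustration only.

```python
from statistics import mean, stdev

# Assumed Log10 gc/L detection limits per target (illustrative only)
LOD_LOG10 = {"HuNoV GI": 3.0, "HuNoV GII": 3.0, "RV": 3.5}

def paired_removals(target, influent_log10, reclaimed_log10):
    """Log10 removal for each paired sample; None in the reclaimed list marks a
    non-detect and is replaced by the target's detection limit."""
    lod = LOD_LOG10[target]
    removals = []
    for inf, rec in zip(influent_log10, reclaimed_log10):
        rec_value = lod if rec is None else rec
        removals.append(inf - rec_value)
    return removals

# Illustrative paired titres (Log10 gc/L); None = not detected in the effluent
influent = [7.8, 8.1, 7.5, 8.3]
reclaimed = [6.9, None, 6.4, 7.0]

removals = paired_removals("HuNoV GII", influent, reclaimed)
print(f"Mean removal: {mean(removals):.2f} +/- {stdev(removals):.2f} Log10 gc/L")
```

Substituting the detection limit for a non-detect yields a conservative (lower-bound) estimate of removal, since the true effluent titre can only be at or below that limit.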
Based on the PMAxx-RT-qPCR results, the mean reduction between paired influent wastewater and reclaimed water samples was 1.39 ± 0.51 Log 10 gc/L for HuNoV GI, while HuNoV GII was detected in only one effluent sample, with a reduction of 3.06 ± 0.45 Log 10 gc/L, making it the enteric virus with the greatest removal. Kevill et al. reported values showing a similar trend to our results for HuNoV before conducting a PMAxx-RT-qPCR; however, in their case, the reductions observed for HuNoV GII were lower than those observed in our study. The mean RV reduction after the reclamation treatment was 1.29 ± 0.29 Log 10 . The removals obtained by the capsid integrity assay show statistically significant differences ( p < 0.05) compared with those obtained from (RT)-qPCR for HuNoV GII and RV, but not for HuNoV GI. This approach enables the estimation of disinfection treatment effectiveness and of the risk of pathogens spreading through wastewater reuse. These results support previous knowledge that HuNoV GI presents higher resistance to reclamation and disinfection processes than HuNoV GII , having greater prevalence and stability in the environment, and therefore being more associated with water-related outbreaks and the possibility of crop contamination. Unlike HuNoV GI, HuNoV GII is generally linked to food-related outbreaks, mainly due to food handling and its lower resistance to reclamation treatments . RV is remarkably resistant to the reclamation process, being transmitted through contaminated water among other infection pathways and being able to survive for long periods in the environment . HuNoV GI has been reported in a high number of vegetable and fruit-associated outbreaks . RV has also been detected in raw vegetables, although not as frequently as HuNoV GI . Furthermore, RV has been identified as being linked to the post-harvest use of water . However, the risk posed by RV contamination of fresh vegetables is not well understood . The higher prevalence of HuNoV GI and RV in sewage indicates that reclaimed water is a probable source of fresh vegetable contamination . Thus, determining the quality of the available water source may prevent the contamination of fresh vegetables during the pre-harvest stage via irrigation and throughout the food production chain. The low infectious dose of enteric viruses and their ability to remain infectious under certain conditions entail the subsequent exposure of consumers to potentially infectious HuNoV and RV when they consume fresh and uncooked vegetables. According to Regulation (EU) 2020/741 and considering the detection of viruses by PMAxx-RT-qPCR, the reclaimed waters analyzed in this study should not be used for agricultural purposes.
Considering the characteristics of the samples and the study’s objectives, viral titers were not adjusted based on the recovery of the process control, as back-calculation is not recommended . The average viral concentrations in influent wastewater (n = 30) were 4.11 ± 0.62 (26/30), 7.87 ± 0.97 (30/30), and 8.11 ± 1.31 (27/30) Log 10 gc/L for HuNoV GI, HuNoV GII and RV, respectively . Haramoto and collaborators summarized the average concentrations of HuNoV GI, HuNoV GII, and RV in different environmental water samples, and our results are consistent with those findings, except for RV, for which we recorded higher levels. Additionally, these findings align with those reported by Stobnicka and collaborators , where HuNoV GII was the most prevalent enteric virus, followed by HuNoV GI and RV. Similar results have also been reported by other authors , showing a higher concentration of RV followed by HuNoV GII and HuNoV GI in influent wastewater. However, Randazzo et al. described lower levels for RV (5.41–6.52 Log PCR units (PCRU)/L). There are few studies that have analyzed the distribution of enteric viruses in environmental samples over long periods of time , and particularly in sewage . The viral concentrations obtained over ten months and distributed across the study by season are represented in . In influent wastewater, statistically higher levels of HuNoV GII were observed during the fall season ( p < 0.05). These trends align with previously findings that also reported higher levels of HuNoV GI and GII in the cold months (October–March), with HuNoV GII being more prevalent than HuNoV GI . However, considering the duration of this study, the term seasonality may not be fully applicable. To accurately assess the impact of climate on the distribution of enteric viruses in environmental samples, more extensive and longer-term studies, spanning at least three years, are deemed necessary. Regarding viral fecal indicators, crAssphage showed the highest concentrations, which ranged from 5.71 to 9.67 Log 10 gc/L (30/30) in influent wastewater. Wu et al. reported values ranging from 7.20 to 8.96 Log 10 gc/L on influent wastewater, which aligns with concentrations reported in other studies from Italy, US, and Japan . The concentration of crAssphage in influent wastewater can reach levels up to 10 Log 10 gc/L , although it may vary depending on factors such as urbanization level, population served by WWTP, available infrastructures, climate conditions, and the impact of diet on the gut microbiome . In parallel, somatic coliphages were monitored by plate count, and the results showed mean concentrations of 5.36 ± 0.79 Log 10 plaque-forming units (pfu)/L (30/30) in influent wastewater. However, in a recent review , somatic coliphages were found at higher levels, with an average of 7.26 ± 0.50 Log 10 pfu/L. Additionally, in a study conducted on influent wastewater across the United States, the average of somatic coliphages was 5.61 ± 0.91 Log 10 pfu/L. In general, influent wastewater is known to present a high prevalence of human enteric viruses . Considering the current climate change situation and the challenge of water scarcity, it is important to treat and regenerate these waters for various purposes . At the international level, there are different regulations proposing acceptable removal targets for the correct reuse of wastewater . 
Bacterial indicator counts are generally used, but monitoring of viral indicators is typically not required, though virus removal rates are often prescribed by treatment requirements for system design . The most recent European legislation 2020/741 sets minimum requirements for wastewater reuse, specifically requiring a ≥6 Log 10 reduction in rotavirus and coliphages. This legislation also emphasizes the need to validate monitoring programs as a barrier to virus transmission in reclaimed water used for agricultural irrigation . In reclaimed water samples (n = 30), the most prevalent virus, RV, was detected with average concentrations of 7.05 ± 0.61 Log 10 gc/L (30/30). Additionally, HuNoV GI and HuNoV GII were found in reclaimed waters at levels of 3.23 ± 0.46 (20/30) and 6.83 ± 0.60 (17/30) Log 10 gc/L, respectively . Overall, the HuNoV GI and HuNoV GII concentration in reclaimed water reported in this study was higher than those previously reported . While Randazzo and collaborators reported RV levels (<5.51 Log PCRU/L) lower than those reported in our study. CrAssphage is consistently present and has been reported in waters that receive human fecal pollution . All reclaimed water samples tested positive for crAssphage by qPCR, with levels ranging from 4.53 to 8.26 Log 10 gc/L (30/30). These levels are similar to those previously described . The presence of somatic coliphages in reclaimed water was analyzed to verify compliance with legislative reduction requirements and to assess their correlation with the presence of human enteric viruses, as the detection of somatic coliphages in reclaimed water may serve as an indicator of the presence of enteric viruses or the efficacy of their elimination. After the wastewater treatment, the mean removal of somatic coliphages was 3.18 ± 1.74 Log 10 pfu/L . Values provided in a recent review showed a reduction in somatic coliphages levels in European WWTPs of 2.32 ± 0.42 Log 10 pfu/L, being significantly lower than the results obtained in our study. The study conducted by Worley-Morse et al. , carried out in United States, showed an initial mean reduction in somatic coliphages in primary treatment of 0.4 Log 10 pfu/L. In secondary treatment, reductions ranged from 0.06 to 3 Log 10 pfu/L, relative to initial somatic coliphages levels of 6.2 ± 0.49 Log pfu/L. While the reduction in coliphages reported in our study did not meet legislative specifications, it is noteworthy that coliphages were the only analyzed viruses to achieve complete reduction in 40% of the reclaimed water samples . None of the studied enteric viruses or crAssphage achieved the required reduction after the wastewater treatment , indicating a low efficacy in virus removal by the analyzed WWTP. The mean Log 10 removals were 0.96 ± 0.72, 2.29 ± 0.95, 1.03 ± 0.60, and 3.18 ± 1.34 gc/L for HuNoV GI, HuNoV GII, RV, and crAssphage, respectively . It is important to note that, while infectivity cannot be directly inferred from (RT)-qPCR detection, the observed combination of factors warrants caution in the reuse of these waters. Considering the levels of somatic coliphages and the high concentration of enteric viruses recorded in the reclaimed water samples of our study, it is advisable to reject these reclaimed waters and consider them unsuitable for agricultural irrigation. Fecal indicator bacteria have been proven to not accurately reflect viral risk to human health as they do for pathogenic bacteria . 
CrAssphage, which has lately been raised as a novel fecal marker, has been suggested as a new viral indicator in wastewater samples analyses . The presence of crAssphage indicates fecal contamination from human or animal sources. Increased levels of crAssphage within reclaimed water heighten the probability of pathogenic viruses. Recent studies have also shown crAssphage to be a robust indicator of fecal contamination in the environment and in different water matrices . However, the correlation between crAssphage and the presence of human viral pathogens is not clear and further research is needed. In our study, a strong positive correlation (n = 30) of crAssphage with HuNoV GII (ρ = 0.86, p = 0.01) and a moderate correlation with RV (ρ = 0.62, p = 0.06) was observed in reclaimed water analyzed by (RT)-qPCR. The same correlation test was performed with reclaimed water samples positive for somatic coliphages and did not show any correlation in conjunction with enteric viruses . To avoid overestimating the risk of inactivated viruses by the use of molecular techniques, a capsid integrity assay was conducted. PCR-based monitoring of enteric viruses in reclaimed water can be a sensitive and specific tool for assessing compliance with European legislation. However, molecular-based methods can detect both infectious and non-infectious viruses, which may overestimate the risk associated with reclaimed water . Traditional cell-culture methods for assessing viral infectivity in water samples have faced challenges , leading to the development of new methods based on capsid integrity using viability markers. These methods have shown promising results for evaluating the infectivity of enteric, mainly HuNoV and hepatitis A virus, and respiratory viruses in wastewater and other matrices . Capsid integrity, among other capsid integrity methods, is a valid and robust indicator of virus infectivity and can enhance risk assessment in monitoring programs . This study provides additional insights into the optimal conditions for quantifying intact capsid enteric viruses in influent wastewater and reclaimed waters, particularly for RV, for which such novel optimized methods have not been validated previously. To validate the PMAxx-RT-qPCR protocol, different dilutions of influent wastewater were conducted and were tested for HuNoV GI, HuNoV GII, and RV presence to achieve the best performance. However, the signal was not efficiently reduced after inactivation at 99 °C together with PMAxx treatment. Thus, simple photoactivation was not sufficient to evaluate the potential infectivity of HuNoV GI, HuNoV GII, and RV in these types of samples. It is known that various factors (concentration and dye intercalating conditions, matrix, among others) can prevent a proper photoactivation of PMAxx affecting signal reduction in inactivated and treated samples . Therefore, diluted influent wastewater and reclaimed water samples in PBS (5-fold and 2-fold, respectively) were subjected to double photoactivation, after the thermal inactivation step, and the signal of the samples treated with PMAxx was completely reduced. In all cases, a negative process control was used ( and ). The presence of potentially infectious viruses was tested in a subset (n = 18) of influent wastewater and reclaimed water samples using the optimized PMAxx-RT-qPCR method for RV and HuNoV . 
The evaluation of influent wastewater and reclaimed water samples over the course of the study using the PMAxx-RT-qPCR method revealed the presence of potentially infectious HuNoV GI, HuNoV GII, and RV . After performing the capsid integrity (RT)-qPCR under optimized conditions, the cycle threshold (Ct) values are shown in . Our results indicate that 89% of influent wastewater samples treated with the optimized PMAxx protocol (n = 9) tested positive for HuNoV GI, and 100% tested positive for HuNoV GII, with average concentrations of 4.59 ± 0.32 Log 10 gc/L (8/9) and 7.46 ± 0.50 Log 10 gc/L (9/9), respectively. RV was present in 67% of influent wastewater samples analyzed both with and without the optimized PMAxx protocol, with higher mean levels than the other two viruses, at 8.12 ± 0.25 Log 10 gc/L (6/9). In positive reclaimed water samples treated with the optimized PMAxx protocol (n = 9), HuNoV GI was detected in 67% of samples with average concentrations of 3.82 ± 0.52 Log 10 gc/L (6/9), while HuNoV GII was detected in only one replicate of all the analyzed samples, with a concentration of 5.94 Log 10 gc/L (1/9). Additionally, RV was detected in 78% of the samples with concentrations of 6.69 ± 0.48 Log 10 gc/L (7/9). The results of the capsid integrity assay suggest that positive reclaimed waters could spread potentially infectious viruses into the environment. A high prevalence of HuNoV GI, GII and RV has been consistently reported in influent wastewater despite yearly fluctuations . After reclamation treatments, enteric viruses demonstrate a significant reduction, with an expected average decrease of 1 to 1.5 Log 10 due to conventional secondary activated sludge treatment . However, removal rates vary considerably based on the treatment facility . In our study, the detection limit of each virus was used to perform the analyses in cases of total reduction among paired samples. Based on the PMAxx-RT-qPCR results, the mean reduction between influent wastewater and reclaimed water levels was 1.39 ± 0.51 Log 10 gc/L for HuNoV GI, while HuNoV GII, detected in only one effluent sample, showed a reduction of 3.06 ± 0.45 Log 10 gc/L, making it the enteric virus with the greatest removal. Kevill et al. reported values showing a similar trend to our results for HuNoV before conducting a PMAxx-RT-qPCR; however, in their case, the reductions observed for HuNoV GII were lower than those observed in our study. The mean RV reduction after the reclamation treatment was 1.29 ± 0.29 Log 10 . The removals obtained by the capsid integrity assay show statistically significant differences ( p < 0.05) compared to those obtained from (RT)-qPCR for HuNoV GII and RV, but not for HuNoV GI. This approach enables the estimation of disinfection treatment effectiveness and of the risk of pathogens spreading through wastewater reuse. This finding supports the view that HuNoV GI is more resistant to reclamation and disinfection processes than HuNoV GII , has greater prevalence and stability in the environment, and is therefore more often associated with water-related outbreaks and with the possibility of crop contamination. Unlike HuNoV GI, HuNoV GII is generally linked to food-related outbreaks, mainly due to food handling and its lower resistance to reclamation treatments . RV is remarkably resistant to the reclamation process, is transmitted through contaminated water among other infection pathways, and can survive for long periods in the environment . 
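Two computational details mentioned above can be sketched in code: substituting the assay detection limit when a virus is fully removed in the effluent, and testing whether removals estimated by the capsid integrity (PMAxx) assay differ from those estimated by standard (RT)-qPCR. The data below are invented, and the choice of a Wilcoxon signed-rank test is an illustrative assumption rather than the authors' stated procedure.

```python
import numpy as np
from scipy.stats import wilcoxon

LOD_LOG10 = 3.0  # hypothetical detection limit in Log10 gc/L


def removal(influent: float, effluent, lod: float = LOD_LOG10) -> float:
    """Log10 removal; a non-detect in the effluent is replaced by the detection limit."""
    effluent = lod if effluent is None else effluent
    return influent - effluent


print(removal(7.0, None))  # total reduction case -> 4.0 using the LOD

# Hypothetical paired removals (Log10) for the same samples by the two assays.
removal_qpcr = np.array([0.9, 1.2, 0.8, 1.1, 1.0, 0.7])
removal_pmaxx = np.array([1.3, 1.6, 1.2, 1.5, 1.4, 1.1])

stat, p = wilcoxon(removal_qpcr, removal_pmaxx)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
```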
HuNoV GI has been reported in a high number of vegetable- and fruit-associated outbreaks . RV has also been detected in raw vegetables, although not as frequently as HuNoV GI . Furthermore, RV has been identified as being linked to the post-harvest use of water . However, the risk posed by RV contamination of fresh vegetables is not well understood . The higher prevalence of HuNoV GI and RV in sewage indicates that reclaimed water is the probable source of fresh vegetable contamination . Thus, determining the quality of the available water source may prevent the contamination of fresh vegetables during the pre-harvest stage via irrigation and throughout the food production chain. The low infectious dose of enteric viruses and their ability to remain infectious under certain conditions mean that consumers may subsequently be exposed to potentially infectious HuNoV and RV by consuming fresh and uncooked vegetables. According to Regulation (EU) 2020/741 and considering the detection of viruses by PMAxx-RT-qPCR, the reclaimed waters analyzed in this study should not be used for agricultural purposes. In this study, the monitoring of enteric viruses and crAssphage was conducted over 10 months on both influent wastewater and reclaimed water samples by (RT)-qPCR. Furthermore, an optimized capsid integrity assay was applied using the intercalating dye PMAxx. Additionally, somatic coliphage counts were assessed, and their absence in reclaimed water samples did not correlate with the removal of potentially infectious viral particles. The optimized PMAxx-RT-qPCR method served as a useful tool to check capsid integrity and address the potential infectivity of enteric viruses in both influent wastewater and reclaimed water. This study provides insights to better understand the presence and potential infectivity of enteric viruses, particularly RV, in reclaimed waters intended for agricultural purposes. Nevertheless, capsid integrity assays do not guarantee the infectivity of the samples; therefore, future research needs to focus on comparative studies between molecular assays and viral cell culture on environmental samples.
Comparison of three spot proteinuria measurements for pediatric nephrotic syndrome: based on the International pediatric Nephrology Association 2022 Guidelines
90462b69-a2bf-40ae-a6e3-2b980dfb037d
10512887
Internal Medicine[mh]
Introduction The average incidence of pediatric nephrotic syndrome (NS) is 1–17 per 100,000, varying by ethnicity and region . Over 85% of cases of childhood NS respond to steroid therapy, while 10–15% remain unresponsive or develop steroid resistance . NS, particularly steroid-resistant NS, can progress into end-stage kidney disease (ESKD) in an average of 6–191 months , contributing to as many as 10%–28% of the primary diagnoses of ESKD . NS requires routine monitoring, primarily of proteinuria. In the United States, pediatric NS has a median annual cost of $140 (IQR $40–$1000) for laboratory testing . Low self-confidence, distress, and frequent absences from school for hospital visits due to pediatric NS lower the quality of life of 94% of patients, resulting in lower social and educational attainment compared with healthy children . Pediatric NS is diagnosed by nephrotic-range proteinuria, measured by first-morning, 24-h, or dipstick proteinuria . The 24-h urine protein (24-h UP) method is the gold standard, but it is cumbersome in the outpatient setting and impractical in children, particularly when they are not toilet trained . Yang et al. (2017) excluded approximately 30% of pediatric 24-h UP samples from their analysis due to inadequate urine collection . Second to 24-h UP, first-morning urine collection is preferred due to lack of biological variation . However, this collection method requires specific conditions, such as 4–8 hours of sleep to ensure the absence of hydration and physical activity . To prevent protein degradation, storage of urine samples at 2–8 °C is recommended (or at room temperature for 2–4 h after collection) . Therefore, it is difficult to obtain the first-morning urine in an outpatient setting . Several studies have reported that second-morning urine samples taken at 8–10 AM are comparable to 24-h and first-morning urine samples . Position statements from Kidney Disease Improving Global Outcomes (KDIGO), National Kidney Foundation, National Institute for Clinical Excellence, and Caring for Australians with Renal Impairments suggest that although first-morning urine is preferred, random urine collection may be used if first-morning collection is inconvenient ( Table S1 ) . Random spot urine measurements, such as spot urinary protein creatinine ratio (UPCR) and urine dipstick tests, are semiquantitative methods widely used as initial screening tools for proteinuria due to their low cost, wide availability, and efficiency . Several studies have reported conflicting correlations between 24-h UP and random spot urine measurements. Studies have shown strong correlations between urine dipstick test values and 24-h UP excretion in patients with nephropathy ( r = 0.75) , as well as between UPCR and cumulative 24-h UP in children ( r = 0.801, p < 0.001) and in adults ( r = 0.98, p < 0.05) , due to the stability of urinary creatinine and protein excretion rates throughout the day . One study reported only a moderate correlation between UPCR and 24-h UP in children ( r = 0.67) and in adults ( r = 0.60) . Therefore, whether UPCR is an equivalent predictor of kidney outcomes and a reliable replacement for 24-h UP remains unclear . To our knowledge, the current study is the first to test the role of spot UPCR and manual and automated dipstick values in diagnosing new cases, remission, and no remission/relapse of pediatric NS, based on the International Pediatric Nephrology Association (IPNA) 2022 and KDIGO 2021 Guidelines. 
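For readers unfamiliar with the spot measurement, the sketch below (Python, with hypothetical inputs) shows how a UPCR is derived from a single urine sample and interpreted against the nephrotic-range (≥2.0 mg/mg) and complete-remission (<0.2 mg/mg) thresholds discussed in this paper; the function names are illustrative only.

```python
def upcr_mg_per_mg(urine_protein_mg_dl: float, urine_creatinine_mg_dl: float) -> float:
    """Spot urinary protein-to-creatinine ratio; both analytes share the same units."""
    return urine_protein_mg_dl / urine_creatinine_mg_dl


def classify(upcr: float) -> str:
    # Cutoffs follow the IPNA/KDIGO definitions cited in the text.
    if upcr >= 2.0:
        return "nephrotic-range proteinuria (no remission/relapse)"
    if upcr < 0.2:
        return "complete remission"
    return "non-nephrotic proteinuria"


# Hypothetical spot sample: 180 mg/dL protein, 60 mg/dL creatinine -> UPCR = 3.0 mg/mg.
ratio = upcr_mg_per_mg(180, 60)
print(f"UPCR = {ratio:.2f} mg/mg -> {classify(ratio)}")
```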
Most studies assessing urine dipstick use in kidney disease have focused on albuminuria in adults with diabetes and/or chronic kidney disease. The diagnostic value of random dipstick proteinuria compared to UPCR and 24-h UP in children with established NS diagnoses was first reported in 1990 . However, the UPCR proteinuria cutoff then specified was lower than the current standard of nephrotic-range proteinuria, recommended by KDIGO in 2012 (> 1.0 g/g vs. ≥ 2.0 g/g, respectively) . IPNA and KDIGO defined cutoffs of 0.2 mg/mg for complete remission and 2.0 mg/mg for relapse in pediatric NS, based on expert consensus . Consensus opinions were developed based on uncontrolled series of children or uncontrolled trials in adults . One pediatric study assessing the proteinuria cutoff included a broad diagnostic criterion, classifying the proteinuria as tubulointerstitial or glomerular, but the cutoff was not exclusively for NS . Material and methods 2.1. Ethics approval The Research Ethics Committee of the Faculty of Medicine Universitas Indonesia, Cipto Mangunkusumo Hospital approved this study (number KET-1187/UN2.F1/ETIK/PPM.00.02/2020). Written informed consent was obtained from participants’ legal guardians or, when appropriate, the participants. The study followed the Helsinki Declaration. 2.2. Study subjects Data were obtained from children aged 3–18 years admitted to the pediatric nephrology outpatient clinic or pediatric ward of the Cipto Mangunkusumo Hospital from 1 January 2021 to 31 December 2021. Ninety-two NS patients were included. They had received an initial diagnosis, had relapsed, and had received treatment but had not achieved complete remission. NS was diagnosed based on the IPNA guidelines . Only patients with eGFR ≥ 60 mL/min per 1.73 m 2 were included. Participants who had severe malnutrition or could not complete the 24-h UP test were excluded. 2.3. Urine tests Participants were asked to provide 24-h urine samples by discarding their first-morning urine and then collecting each subsequent urination through the first-morning urination of the next day. Next, a morning urine sample, collected before 9 AM, was submitted for protein (mg/dL) and creatinine (mg/dL) estimation and dipstick tests (see limitations in the Discussion section). The 24-h UP, UPCR, and dipstick results were compared. UPCR and 24-h UP were analyzed via the ARCHITECT c8000 automated analyzer machine (Abbott USA) using the Jaffe enzymatic and colorimetric methods, respectively. The Siemens CLINITEK Advantus® urine chemistry analyzer performed automated urine analysis. Manual urine dipstick tests, which used Verify™ urinalysis reagent strips, were visually interpreted by a blinded doctor as negative, trace, 1+, 2+, or 3+ . 2.4. Statistical analysis Parameters were investigated using STATA 17.0. Data are expressed as median with interquartile range (IQR). Nonparametric analysis was performed, as the Kolmogorov–Smirnov test indicated non-normally distributed data. p < 0.05 was considered statistically significant. Cutoffs were determined by selecting the value yielding the highest Youden index, calculated as sensitivity-(1-specificity). Correlations between 24-h UP and the UPCR and dipstick test results were calculated as Spearman’s correlation coefficient. Agreement between automated and manual urinary dipstick tests was measured using the Kappa (κ) statistic. Our sample size had 92.5% power to evaluate the specificity of UPCR in diagnosing NS with no remission/relapse. 
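As a sketch of the cutoff-selection step described in the statistical analysis, the snippet below finds the UPCR threshold maximizing the Youden index (sensitivity + specificity − 1) using scikit-learn's ROC utilities; the reference labels and UPCR values are fabricated for illustration and do not reproduce the study's dataset.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = no remission/relapse by 24-h UP (reference), 0 = otherwise.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1])
upcr = np.array([3.1, 2.4, 2.9, 0.3, 1.1, 2.2, 0.4, 1.8, 4.0, 0.2, 0.9, 2.6])

fpr, tpr, thresholds = roc_curve(y_true, upcr)
youden = tpr - fpr  # Youden index at each candidate threshold
best = np.argmax(youden)
print(f"optimal cutoff = {thresholds[best]:.2f} mg/mg, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}, "
      f"AUC = {roc_auc_score(y_true, upcr):.2f}")
```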
Results 3.1. Patient characteristics Ninety-two pediatric NS patients participated. Their median age was 10.04 years (IQR 6.54–12.96). Most participants were male (63.04%), and 81.52% were diagnosed with steroid-resistant NS (SRNS). The rest were steroid-dependent NS (SDNS, 14.13%), in primary remission (2.17%) or had only received an initial diagnosis (2.17%) . 3.2. Urinary protein creatinine ratio, automated dipstick, and manual dipstick tests Spearman’s correlation coefficients are reported in . UPCR demonstrated a stronger correlation with 24-h UP ( r = 0.83, p < 0.001) than automated ( r = 0.79, p < 0.001) and manual urine dipsticks ( r = 0.78, p < 0.001). The sensitivity, specificity, PPV, and NPV of the three proteinuria measurements for identifying relapse and remission were calculated ( and ). UPCR had the highest sensitivity (95.24%) and specificity (91.55%) for identifying no remission/relapse, while dipstick tests had the highest specificity for identifying complete remission. 
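The diagnostic-performance figures quoted above come from 2×2 comparisons against the 24-h UP reference; a minimal sketch of how sensitivity, specificity, PPV, NPV, and the dipstick agreement (Cohen's kappa) can be computed is given below. The counts and dipstick readings are invented, chosen only so the derived metrics land near the values reported above.

```python
from sklearn.metrics import cohen_kappa_score


def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard 2x2-table metrics against the 24-h UP reference classification."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }


# Hypothetical counts for UPCR >= 2.0 mg/mg vs. 24-h UP nephrotic-range proteinuria.
print(diagnostic_metrics(tp=20, fp=6, fn=1, tn=65))

# Hypothetical paired dipstick readings (ordinal grades) for the agreement analysis.
manual = ["neg", "1+", "2+", "3+", "neg", "2+", "1+", "3+"]
automated = ["neg", "1+", "1+", "3+", "trace", "2+", "1+", "2+"]
print(f"kappa = {cohen_kappa_score(manual, automated):.2f}")
```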
3.3. Validity of the urinary protein creatinine ratio The ability to identify optimal cutoff values of UPCR for complete remission and no remission/relapse was analyzed by comparing the highest Youden indices and the areas under the receiver operating characteristic (ROC) curves (AUCs) against the IPNA standard. The optimal UPCR cutoff for identifying no remission/relapse was 2.08 mg/mg (sensitivity = 95.24%, specificity = 91.55%, AUC = 0.93), similar to the 2 mg/mg recommended by the IPNA . For complete remission, the optimal UPCR cutoff was 0.44 mg/mg, which differed from the recommended value of 0.2 mg/mg ( and S2 ). At the 0.44 mg/mg cutoff, sensitivity, specificity, and AUC were 77.97%, 87.88%, and 0.83, respectively, compared with the IPNA recommendations ( and S2 , ). 3.4. Validity of automated and manual dipstick tests The diagnostic abilities of urinary dipstick tests were analyzed to find optimal cutoffs for identifying complete remission and no remission/relapse . The only dipstick cutoff that was compatible with the IPNA recommendation was the manual dipstick result for identifying complete remission, which was indicated by a result of negative or trace ( Tables S4 and S5 ). The manual dipstick test had a slightly higher AUC for identifying no remission/relapse than the automated dipstick test . The automated dipstick test had a slightly higher AUC for identifying complete remission . Manual and automated dipstick readings showed moderate agreement ( k = 0.53, p < 0.001). 3.5. Validity of UPCR, automated dipstick, and manual dipstick in Steroid-Resistant nephrotic syndrome We analyzed the performance of morning spot proteinuria measurements specifically in SRNS, which made up 81.52% of cases in our study . The sensitivity, specificity, PPV, and NPV of the three proteinuria measurements to identify relapse and remission are presented in . UPCR had the highest sensitivity (95.24%) and specificity (94.44%) for identifying no remission/relapse, while dipstick tests had the highest specificity for identifying complete remission. 
Discussion Our participants were predominantly male, consistent with studies in which 57–65.87% of pediatric NS patients were male . The participants had a median age of 10.04 years, whereas previous pediatric NS studies have reported median ages of 4.5–6.9 years . Most children in our study were well nourished; we excluded children with malnutrition because creatinine levels are generally lower in malnutrition , which could skew the UPCR value. Our study samples had a median urine specific gravity of 1.020 (IQR 1.010–1.025), displaying reliable proteinuria findings since dilute urine may give false-negative results for proteinuria . Most cases in our study were SRNS, while other reports suggest that only 10–15% of pediatric NS cases are SRNS . As a national referral hospital in Indonesia, we receive referral cases from other centers following initial steroid treatment failures. Therefore, the high proportion of SRNS cases and older patient age reflects that most of our cases are atypical, difficult-to-treat NS . UPCR has shown a sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 92%, 78%, 95%, and 70%, respectively, for diagnosing pediatric NS . In pediatric patients with fever and NS, automated urine dipstick test values of 2+ have shown values of 60%, 89%, 43%, and 94%, respectively, for detecting non-nephrotic proteinuria , while values of 3+/4+ have shown values of 90%, 91%, 96%, and 77% for detecting nephrotic-range proteinuria . In our study, UPCR was strongly correlated with 24-h UP . A high correlation between UPCR and 24-h UP is evident in other nephropathies. In pediatric patients with proteinuria due to IgA vasculitis-associated nephritis (IgAVN), lupus nephritis, or primary NS, UPCR and 24-h UP were strongly correlated ( r = 0.869) . 
In adults with IgAN, UPCR and 24-h UP were also strongly correlated ( r = 0.847, p < 0.001) , as well as in hepatocellular carcinoma patients with lenvatinib-associated proteinuria ( r = 0.86) . A study investigating amyloid light-chain amyloidosis reported a moderate correlation between UPCR and 24-h UP in patients with proteinuria levels of 500–3,000 mg/day ( r = 0.57) or > 3000 mg/day ( r = 0.62) but a strong correlation in lower levels of proteinuria (< 500 mg/day, r = 0.75) . Different correlation strengths with different proteinuria levels have been demonstrated in pediatric glomerulonephritis, with a stronger correlation in non-nephrotic-range proteinuria ( r = 0.806) and a moderate correlation in nephrotic-range proteinuria ( r = 0.586) . The stronger UPCR and 24-h UP correlation in the current study may be caused by the lower range of proteinuria in our participants. UPCR was highly sensitive and specific for identifying a lack of remission/relapse . Zhai et al. reported that UPCR had a sensitivity of 89.9% and specificity of 92.2% for diagnosing nephrotic-range proteinuria . We analyzed the optimal diagnostic boundary to determine remission or relapse using ROC curves ( and ). The cutoff point of UPCR for no remission/relapse was 2.08 mg/mg. Huang et al. reported a similar optimal cutoff of 2.09 mg/mg for pediatric nephrotic-range proteinuria . Our study only involved pediatric NS, which is considered to be characterized by selective albuminuria, while Huang et al. included participants with selective albuminuria due to NS (58.6%) and nonselective albuminuria, such as IgAVN (38.2%), IgAN (1.8%), and lupus nephritis (1.4%) . Nevertheless, both cutoffs were in line with IPNA guidelines. Lane et al. reported a higher optimal cutoff of 2.35 mg/mg for nephrotic-range proteinuria in adult patients in renal and hypertension clinics . To diagnose complete remission, the IPNA guidelines recommend a UPCR cutoff of < 0.2 mg/mg, corresponding to the recommendations of the National Kidney Foundation and American College of Rheumatology (ACR) 2006, which proposed a standard diagnostic boundary for complete remission of kidney diseases . In children with nephropathies, Huang et al. reported an optimal cutoff of 0.18 mg/mg . However, our study concluded that the recommended cutoff of 0.2 mg/mg, although having high sensitivity, had low specificity. UPCR performed best at a cutoff of 0.44 mg/mg . Although the optimal cutoff in our study was higher than the recommendation, a higher cutoff for complete remission might be plausible. Prior research investigating the reference intervals for proteinuria in healthy children has reported upper limits for UPCR above 0.2 mg/mg . In healthy girls and boys, the UPCR values ranged from 0.04–0.34 mg/mg (upper limit 90% CI 0.21–0.43) and 0.03–0.26 mg/mg (upper limit 90% CI 0.11–0.38), respectively . The automated and manual dipstick tests demonstrated similar correlations with 24-h UP . However, both dipstick test values showed a lower correlation than that between UPCR and 24-h UP. Automated and manual dipstick tests demonstrated limited sensitivity but relatively high specificity and similar AUC values for identifying remission or relapse ( and ). Similarly, Gai et al. (2006) compared 24-h UP, UPCR, and automated dipstick tests in adults with nephropathy and reported a stronger correlation between UPCR and 24-h UP ( r = 0.82) than dipstick and 24-h UP ( r = 0.75) . 
The sensitivity, specificity, and AUC values for the automated dipstick test (49.2%, 93.8%, and 0.778, respectively) were also lower than those for UPCR (91.4%, 75%, and 0.840, respectively) . Furthermore, the automated dipstick test failed to identify pathological proteinuria (≥ 500 mg/day) in 31.6% of the study’s participants . An earlier study investigating pediatric NS reported a sensitivity of 70% and specificity of 68% for dipstick tests in identifying nephrotic-range proteinuria, which is a higher sensitivity but lower specificity than reported in our study . This previous study also reported that the dipstick test performed worse than UPCR, resulting in more proteinuria misclassification . These results are due to automated dipstick tests being semiquantitative assays, while UPCR is purely quantitative. Therefore, dipstick values are less precise and have a lower correlation with 24-h UP. The green hues visible in a dipstick test occur due to a chemical reaction; therefore, false-negative results may occur in significantly diluted urine, while false-positive results may occur in very alkaline or contaminated urine . We analyzed the dipstick tests’ optimal cutoffs for identifying remission and relapse ( , and 7). The automated dipstick test performed best at a cutoff of 1+ for complete remission and 2+ for no remission/relapse. These thresholds differed from the IPNA-recommended cutoff points of negative or trace results for complete remission and 3+ for no remission/relapse. For the manual dipstick test, the result of the trace performed best for identifying complete remission, consistent with the IPNA guidelines. For no remission/relapse, the cutoff was 1+, which was different from the IPNA recommendation of 3 + . A prior study reported that automated dipstick values were unreliable for estimating 24-h UP, being poor predictors of daily urinary protein excretion in adult renal and hypertension clinic patients . A meta-analysis assessing proteinuria in adults with kidney disease and pregnant females reported that an automated dipstick cutoff of 1+ had a sensitivity of 67–100% and specificity of 36–98% for detecting 24-h UP > 300 mg/day . Another study investigating proteinuria in older adults found that the optimal automated dipstick cutoff for UPCR ≥ 0.2 mg/mg was trace (sensitivity = 90.9%, specificity = 87.2%) . Unfortunately, we could not find a study that assessed urine dipstick test cutoff values using the IPNA definition of proteinuria in pediatric NS for equal comparison. We also could not find studies assessing manual dipstick tests for diagnosing pediatric NS. Therefore, we could not compare our results with those of previous studies. Our study has several strengths. Previously published studies assessing proteinuria in children investigated proteinuria for different causes, such as fever, glomerulonephritis, and other nephropathies. Abitbol et al. (2006) suggested that proteinuria and albuminuria profiles are both essential to guide the management of proteinuric diseases in children, including NS . In our study, we investigated only proteinuria in pediatric NS. We compared the validity of urinary automated and manual dipstick tests with the currently accepted gold-standard test, 24-h UP. The manual dipstick test is a practical and helpful screening method for home proteinuria monitoring . 
However, previous studies have reported that dipstick readings are unreliable for therapeutic decisions, which should be based on a more precise quantitative measurement, such as first-morning UPCR or 24-h UP . Moreover, manual dipstick tests are associated with possible errors due to the need for manual interpretation, which could be impacted by the reading time. For example, a false positive could appear if a dipstick is submerged in urine for too long . Therefore, further studies assessing the validity of manual dipstick tests for at-home NS monitoring are essential. Our study has several limitations. It was done at a single referral center with a high proportion of SRNS cases and older pediatric patients, so the results might represent the difficult-to-treat pediatric NS population more than the general pediatric NS population. The identification of relapse and remission is ideally based on dipstick tests over three consecutive days , but our samples were collected at a single point. IPNA also recommends using first-morning urine for diagnosing remission. However, we used morning urine samples, not necessarily the first, because of sample transportation considerations. The children voided their first-morning urine at home, generally before 7 AM; it is usual for children in Indonesia to awaken and void their first-morning urine between 5 and 6 AM. We wanted to use fresh urine samples, voided a maximum of one hour before the test, for the dipstick tests. It is generally challenging for patients living far away to reach the hospital in adequate time to suit the ideal urine collection and storage method . For this reason, we used morning urine voided before 9 AM for our UPCR and dipstick samples. This may have an impact on our findings. However, although first-morning urine has the best correlation to 24-h urine collection , second-morning urine taken at 8–10 AM also presents acceptable results, showing comparable analytes, including protein, to first-morning urine . Our sample size had less than 80% power to evaluate the sensitivity of urine dipstick tests ( and ). Future studies should use a multicenter approach with a greater sample size to represent the general pediatric NS population. We recommend involving parents or caregivers to determine the validity of manual dipstick test implementation for home monitoring of pediatric NS. A prospective study using a proteinuria selectivity index can be useful to predict outcomes in pediatric NS and provide more information than quantitative spot and 24-h proteinuria measurements. The proteinuria selectivity index provides information about the degree of tubulointerstitial and glomerular damage and thus can predict functional and clinical outcomes, treatment response, and disease progression . In this study, we could not apply any changes in patient management based on our proteinuria test results. Therefore, a prospective study using our UPCR and dipstick cutoffs and changing the disease management based on the disease status category (remission or relapse) will be beneficial to test the worth of the new cutoffs. Conclusions In our study, UPCR was more sensitive and specific in identifying no remission/relapse in pediatric NS than the automated and manual dipstick tests. UPCR was more sensitive than urinary dipstick tests for identifying complete remission. The optimal UPCR cutoffs for identifying complete remission and no remission/relapse in pediatric NS were 0.4 and 2.0 mg/mg, respectively. 
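The power statements in the limitations above are reported without the underlying calculation; one common approach for diagnostic accuracy studies (a Buderer-style precision calculation, shown here purely as an assumption, with hypothetical planning values) sizes the sample so that sensitivity or specificity is estimated within a desired margin.

```python
from math import ceil
from scipy.stats import norm


def n_for_sensitivity(expected_sens: float, precision: float, prevalence: float,
                      alpha: float = 0.05) -> int:
    """Buderer-style total sample size to estimate sensitivity within +/- precision."""
    z = norm.ppf(1 - alpha / 2)
    n_cases = (z ** 2) * expected_sens * (1 - expected_sens) / precision ** 2
    return ceil(n_cases / prevalence)


# Hypothetical planning values, not the study's actual assumptions.
print(n_for_sensitivity(expected_sens=0.90, precision=0.10, prevalence=0.25))
```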
Urine dipstick is a highly specific method for identifying remission. The manual dipstick test performed comparably to the automated test. These could be used interchangeably to detect remission or no remission/relapse and could be beneficial for home monitoring.
Hydrophilic Sulfonate Covalent Organic Frameworks for Serum Glycopeptide Profiling
ea0f4683-3c3d-46c6-a8e6-d201071a9ad6
11900406
Biochemistry[mh]
Post-translational modifications (PTMs), which are chemical covalent modifications most often dynamically regulated by enzymes , play significant and prevalent roles in numerous biological processes . Protein glycosylation, one of the most ubiquitous and representative PTMs, strongly influences protein stability, folding, distribution, and activity . Meanwhile, aberrant glycosylation has been reported to be related to the occurrence and development of various diseases, including cancer, Alzheimer’s disease, and cardiovascular disease . In other words, the characteristics of protein glycosylation can provide primary information for clinical diagnosis and for tracking disease progression . Consequently, the elucidation of glycosylated proteins is essential for understanding disease mechanisms . Mass spectrometry (MS) is regarded as a powerful tool for the comprehensive profiling of glycoproteomes due to its excellent sensitivity and high throughput . However, it is challenging to directly detect glycoproteins and glycopeptides in complicated biological samples owing to their extremely low abundance, poor ionization efficiency, the interference of co-existing non-glycopeptides and the inherent heterogeneity of glycans . Therefore, the effective enrichment of trace glycopeptides from highly complex mixtures prior to MS analysis is of great importance for successful glycopeptide identification. To date, a variety of enrichment methods have been developed as efficient strategies for the capture of glycopeptides through distinct enrichment mechanisms, including boronic acid affinity chromatography (BAAC) , hydrazide chemistry , lectin affinity chromatography , and hydrophilic interaction liquid chromatography (HILIC) . Among them, HILIC, which relies mainly on the polarity difference between hydrophilic glycopeptides and hydrophobic non-glycopeptides, can achieve unbiased enrichment of different glycopeptides with high sensitivity and excellent reproducibility. Currently, many HILIC-based materials, such as hydrophilic monolithic columns , modified magnetic nanoparticles , modified silica materials , polymers and metal–organic frameworks , have been widely reported. Unfortunately, due to their limited glycopeptide-specific recognition sites, unsuitable mass transfer kinetics and low relative density of hydrophilic groups, traditional HILIC materials usually suffer from low glycopeptide binding selectivity and low detection sensitivity. Thus, the construction of affinity materials with outstanding hydrophilicity remains highly desirable for efficient glycopeptide enrichment and in-depth glycoproteomic profiling. Covalent organic frameworks (COFs) are an emerging category of porous crystalline materials formed from organic building blocks, mainly composed of H, C, N, O, and B elements, linked by robust covalent bonds through dynamic covalent chemistry (DCC) . In comparison with other types of porous materials, COFs possess the merits of permanent porosity, tunable pore size, relatively high thermal and chemical stability, large surface area and low crystal density , which establishes them as ideal candidates for wide applications in different fields including gas storage , separation , optoelectronics , sensing , catalysis , etc. 
Recently, taking advantage of the excellent performance of COFs, their application in proteomics to enrich targets from complex biological samples has attracted increasing attention. For example, Zhou’s group successfully synthesized a hydrophilic amino-functionalized TpPa-1 COF that showed good selectivity for glycopeptides . Zhang’s group introduced hydrophilic glutathione to improve the hydrophilicity of carboxyl-functionalized COFs using a post-synthetic modification method and enhanced the enrichment performance toward glycopeptides . According to previously reported works, amino-, carboxyl- or hydroxyl-functionalized COFs were generally selected as hydrophilic adsorbents to enrich glycopeptides. However, the capture sensitivity and selectivity of these materials were not fully satisfactory, and designing post-synthetic modification routes is complicated and tedious. As a consequence, developing functionalized COFs with more hydrophilic groups and more glycopeptide recognition sites to improve enrichment performance has become a mainstream pursuit. Sulfonyl groups exhibit excellent hydrophilicity and chemical stability; however, sulfonyl-functionalized materials for the selective enrichment of glycopeptides have rarely been reported . Therefore, sulfonate groups were introduced in this work to functionalize the as-prepared COFs. In this work, a rational selection among a range of sulfonate-rich COFs and a non-sulfonate TpPa COF is presented. TpPa-(SO 3 H) 2 (referred to as SCOF-2), together with three others, TpPa-SO 3 H, TpBD-(SO 3 H) 2 , and TFPB-BD-(SO 3 H) 2 (denoted as SCOF-1, SCOF-3, and SCOF-4), was obtained for the enrichment of glycopeptides. SCOF-2 demonstrated excellent selectivity, and the theoretical calculations were consistent with the experimental results . Moreover, it showed extremely low detection limits as well as good reusability and binding capacity in glycopeptide enrichment. Thereafter, six human serum samples (three healthy volunteers and three patients with ovarian cancer) were enriched using SCOF-2 to evaluate the performance of the sulfonate-rich COFs in the pretreatment of real biological samples. This work offers a new approach for the effective separation and identification of glycoproteins and lays meaningful groundwork for the application of COF materials in post-translational modification proteomics. 2.1. Characterization of the Sulfonate-Rich COFs and TpPa COF SCOF-2 was synthesized based on a conventional Schiff-base reaction . Through alteration of the sulfonated building block with benzene-1,4-diamine (Pa), 2,5-diaminobenzenesulfonic acid (Pa-SO 3 H) or 4,4′-diaminobiphenyl-3,3′-disulfonic acid (BD-(SO 3 H) 2 ), the counterparts TpPa COF, SCOF-1 and SCOF-3 were synthesized for comparison. Moreover, by enlarging the building block size of Tp with 1,3,5-tris(4-formylphenyl)benzene (TFPB), SCOF-4 was also obtained from the condensation reaction of TFPB and BD-(SO 3 H) 2 (the detailed synthesis procedures of TpPa COF, SCOF-1, SCOF-3 and SCOF-4 are shown in ). The crystalline structures of TpPa COF and the SCOFs were first probed using the powder X-ray diffraction (PXRD) technique and theoretical structural simulations. The experimental PXRD results demonstrated that SCOF-2 had two well-resolved peaks at 4.25° and 26.23°, corresponding to the (100) and (001) facets, which matched well with the eclipsed AA stacking mode rather than the staggered AB stacking mode. 
After Pawley refinement, the simulated pattern reproduced the experimental results with satisfactory agreement factors of R p = 4.450% and R wp = 3.742% ( a and ). In addition, the experimental PXRD patterns of TpPa COF, SCOF-1, SCOF-3 and SCOF-4 also exhibited characteristic peaks that are highly consistent with the simulated crystallographic structures and the reported works . These results proved that the TpPa COF and SCOFs were successfully synthesized. Furthermore, the N 2 adsorption/desorption isotherms demonstrated that the TpPa COF and SCOF-1 exhibited both type I and type IV sorption isotherm profiles, verifying the coexistence of micropore and mesopore structures. By contrast, SCOF-2 displayed a typical type IV isotherm, indicating the presence of mesopore structures ( b). The Brunauer–Emmett–Teller (BET) surface areas of the non-sulfonate TpPa COF and SCOF-1 were calculated to be 532 and 274 m 2 g −1 , with average pore sizes of 1.9 and 1.1 nm , while SCOF-2 showed a much smaller surface area of 57 m 2 g −1 and a calculated pore size of 1.4 nm, which might be attributed to the plentiful sulfonate groups in the pore channel . Additionally, Fourier transform infrared (FT-IR) spectra were collected to probe their chemical structures. The characteristic peaks at 1582 and 1248 cm −1 were observed for the three COFs, assigned to the C=C and C-N stretching vibrations, respectively ( c). As for SCOF-1 and SCOF-2, the newly formed peaks at 1080 and 1025 cm −1 were ascribed to the stretching band of O=S=O, demonstrating the existence of sulfonic groups. As shown in the X-ray photoelectron spectroscopy (XPS) of SCOF-2, the peaks at 531.5 eV (O 1s), 400.0 eV (N 1s), 284.7 eV (C 1s), and 167.9 eV (S 2p) were detected . Moreover, thermogravimetric analysis (TGA) showed that TpPa COF, SCOF-1 and SCOF-2 display favorable thermal stability under a N 2 atmosphere, with 34–49% of the weight retained even up to 800 °C ( d). Further, the static water contact angles of the synthesized TpPa COF and SCOFs are shown in e. Compared with TpPa COF, the water contact angle of SCOF-2 decreased by about 47° after introducing hydrophilic sulfonate groups, which laid the foundation for the selective adsorption of glycopeptides. Zeta potential measurements illustrated the surface charges of the TpPa COF and SCOFs in a buffered solution (pH = 6, the pH value of the loading buffer). The zeta potentials of the SCOFs were negative (−25.37, −26.13, −16.07 and −19.33 mV, respectively), indicating the strongly negative surface charge of the SCOFs arising from the sulfonate groups . According to the scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images, TpPa COF consisted of short nanofibers ; SCOF-2, SCOF-3 and SCOF-4 displayed uniform sheet morphologies ; while SCOF-1 showed a relatively smooth, long nanofiber morphology . f exhibits the high-angle annular dark-field scanning TEM (HAADF-STEM) image and elemental mapping images of SCOF-2, in which the components C, N, O, and S were detected concurrently. The TpPa COF and other SCOFs were also characterized by elemental mapping in . 2.2. Rational Selection of Sulfonate-Rich COFs and TpPa COF for Glycopeptide Enrichment Compared with the reported traditional enrichment materials, COFs could be precisely customized and preliminarily designed at the molecular level via the rational selection of porous organic building blocks. 
This provides theoretical guidance for screening possible enrichment media in terms of hydrophilicity, affinity sites, and molecular linkage, three probable key factors affecting the performance of an enrichment material toward glycopeptides. Therefore, one non-sulfonate TpPa COF and four sulfonate-rich COFs (SCOF-1, SCOF-2, SCOF-3, and SCOF-4) with different topologies and different building blocks were selected as enrichment materials to investigate their selectivity for glycopeptides. We first compared the five COFs in terms of hydrophilicity, using the water contact angle as the main indicator. With the weakest hydrophilicity, TpPa COF showed a water contact angle of 88.7° and could only enrich 10 glycopeptides ( a). After introducing a hydrophilic sulfonate group to the building block of SCOF-1, the water contact angle decreased by over 55°, demonstrating enhanced hydrophilicity. As shown in b, 22 glycopeptides with a relatively clean MS background were observed. Moreover, we introduced two functional hydrophilic sulfonate groups to synthesize SCOF-2. Surprisingly, 28 glycopeptides with high signal intensities were detected even though the water contact angle of SCOF-2 was a little higher than that of SCOF-1, which was probably attributable to the abundant affinity sites between the glycopeptides and the sulfonate groups ( c). Additionally, in order to further evaluate the interactions between the glycan moieties of the glycopeptides and the sulfonate group, we enlarged the size of the building blocks and obtained SCOF-3 and SCOF-4. As depicted in d, 21 glycopeptides and a few non-glycopeptides were observed in the MS spectrum, indicating a better enrichment selectivity than the non-sulfonate TpPa COF but worse than SCOF-2. Notably, SCOF-4 showed the strongest hydrophilicity among all the enrichment materials, with a water contact angle of 31.0°. However, only four glycopeptides with low signal intensities were detected, suggesting that SCOF-4 had the worst enrichment selectivity toward glycopeptides ( e). On the basis of the above results, SCOF-2 was selected for further exploration as the best enrichment material, even though its hydrophilicity was not the strongest among the SCOF materials. 2.3. Theoretical Calculations of Adsorption Modes Between Different COF Models and Monosaccharides Glycans are diverse structures that are composed of various monosaccharide building blocks including mannose (Man), galactose (Gal), fucose (Fuc), N -acetyl-glucosamine (GlcNAc), and others. The terminal Neu5Ac unit is one of the most common motifs in sialylated glycans and can interact with the glycans on proteins to provide adhesion and recognition. Afterward, the possible adsorption modes between the five COFs and the representative Neu5Ac were calculated. The adsorption of Neu5Ac on the different COF models was modeled in the CP2K code using density functional theory (DFT) methods (the detailed DFT calculations are shown in ). As depicted in a, one kind of hydrogen bond was formed between the carbonyl moiety in the TpPa COF framework and the hydroxyl of Neu5Ac. After introducing the mono-sulfonate into the SCOF-1 framework, three kinds of hydrogen bonds were formed: one between the sulfonate moiety and the hydroxyl of Neu5Ac, and the others originating from the interaction between the sulfonate moiety and the carboxylic acid of Neu5Ac ( b). 
Notably, due to the abundant sulfonate affinity sites in SCOF-2, four kinds of hydrogen bonds were formed, not only between the sulfonate moiety and the carboxylic acid of Neu5Ac but also between the sulfonate moiety and the dihydroxyl of Neu5Ac ( c). As for SCOF-3, because of the enlarged building block size, the adsorption structure was not stable, although the adsorption mode was similar to that of SCOF-1 ( d). As depicted in e, because SCOF-4 has the largest building block size, only one kind of hydrogen bond was formed between the sulfonate moiety and the carboxylic acid of Neu5Ac, demonstrating that its adsorption mode was less stable than those of the other COFs. Side views of the possible adsorption modes between the five COFs and Neu5Ac are also shown in . In addition, the adsorption energy (ΔE) between Neu5Ac and the different COF models was calculated, and the results are shown in f. SCOF-2 showed the most negative ΔE (−106.02 kcal mol −1 ) among the COF models, corresponding to the most stable adsorption structure, since a more negative energy value is more conducive to adsorption. These results show that SCOF-2 exhibits satisfactory chemoselectivity toward Neu5Ac, suggesting potential enrichment selectivity toward glycopeptides. 2.4. Investigation of the Selectivity of SCOF-2 for Glycopeptide Enrichment In order to evaluate the enrichment performance of SCOF-2 for glycopeptides, a tryptic digest of the standard model glycoprotein horseradish peroxidase (HRP) was processed with SCOF-2 through sequential loading, washing and elution, and the eluate was finally analyzed using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Given the retention mechanism, the composition of the loading buffer has a strong influence on the result of glycopeptide enrichment because it changes the polarity. In this work, different loading buffer compositions were examined by altering the concentrations of ACN (80%, 85%, 90%, and 95%) and TFA (0.05%, 0.1%, 0.5%, and 1.0%) in the mixed solutions . As the composition approached 90% ACN and 0.1% TFA, the number of observed glycopeptides with high signal intensities gradually increased, demonstrating that the interactions between SCOF-2 and the glycopeptides became stronger. Consequently, we adopted 90% ACN/0.1% TFA ( v / v ) as the optimal loading buffer. Moreover, to maximize the number of glycopeptides recovered with SCOF-2, four different elution buffers (30% ACN with 0.05%, 0.1%, 0.5%, and 1.0% TFA) were investigated using the optimal loading buffer . The greatest number of glycopeptides was obtained when using 30% ACN/0.1% TFA ( v / v ) as the elution buffer. Additionally, human serum immunoglobulin G (IgG) was chosen as another model glycoprotein to verify the optimal loading and elution buffers. As shown in , considering the number of detected glycopeptides and their signal intensities, we finally selected 90% ACN/1.5% TFA ( v / v ) and 30% ACN/0.5% TFA ( v / v ) as the best loading and elution buffers, respectively. The detailed information on the detected glycopeptides from the HRP and IgG tryptic digests is summarized in , respectively. Under the optimal enrichment conditions, the HRP tryptic digest was first employed for glycopeptide enrichment using SCOF-2. 
Before enrichment, only five glycopeptide peaks with low signal intensities were identified due to the interference of non-glycopeptides, whereas 28 glycopeptides appeared against a clean spectral background after treatment with SCOF-2 . Furthermore, a human serum IgG tryptic digest was also chosen to evaluate the universality of SCOF-2 in glycopeptide enrichment. Similarly, after enrichment 16 glycopeptides could be observed and their peak intensities were markedly enhanced, while only four glycopeptide peaks with low signal-to-noise (S/N) ratios could be detected without enrichment . 2.5. Analytical Performance of SCOF-2 for Glycopeptide Enrichment To further confirm the enrichment performance of SCOF-2, its selectivity and sensitivity toward glycopeptides, two vital indicators, were also investigated. Bovine serum albumin (BSA) was employed as the interfering non-glycoprotein. Complex samples consisting of HRP tryptic digest and BSA tryptic digest at different molar ratios (1:10, 1:100, 1:1000, and 1:5000) were used to assess its selectivity. As illustrated in , at HRP:BSA molar ratios of 1:10 and 1:100, 20 and 16 glycopeptides could be detected, respectively, though a few non-glycopeptides appeared in the spectrum. Remarkably, when the molar ratio was increased to 1:1000, 11 glycopeptides remained detectable ( a). It is worth pointing out that even when the molar ratio was further increased to 1:5000, eight glycopeptides could still be distinctly identified, demonstrating the excellent enrichment selectivity of SCOF-2 toward glycopeptides ( b). Subsequently, the detection sensitivity of SCOF-2 for glycopeptides was further determined by stepwise reduction of the concentration of the HRP tryptic digest. When the HRP digest concentration was decreased to 10 fmol μL −1 , or even to a low concentration of 1.0 fmol μL −1 , SCOF-2 could still easily enrich the glycopeptides . Then, when the HRP digest concentration was reduced to 0.1 fmol μL −1 , six glycopeptides remained visible after enrichment ( c). Moreover, at an even lower concentration of 0.01 fmol μL −1 , two glycopeptides were still predominant in the spectrum ( d). These results demonstrated that SCOF-2 has exceptional detection sensitivity for glycopeptide enrichment. To sum up, SCOF-2 showed outstanding selectivity and sensitivity towards glycopeptides, better than those of the previously reported hydrophilic materials listed in . Thus, it is well suited to the detection of glycopeptides in genuinely complex samples. The binding capacity for glycopeptides, one of the key parameters for any novel enrichment material, was measured using a reported method by adding different amounts of SCOF-2 to a fixed amount (150 μg) of HRP digest. The S/N ratios of four selected glycopeptides ( m / z = 2850.7, 3572.8, 3672.1 and 4984.3) in the eluate were analyzed using MALDI-TOF MS. The S/N ratios of these peaks progressively increased and then reached saturation as the amount of SCOF-2 increased . Thus, the binding capacity was estimated to be 150 mg g −1 . This result indicated that SCOF-2 has a high binding capacity for glycopeptides, which might be attributed to its abundant binding sites and outstanding hydrophilicity. In addition, the reusability and stability of SCOF-2 for the enrichment of glycopeptides were explored using an HRP tryptic digest. 
In order to investigate reusability, the previously used SCOF-2 was rinsed with elution buffer to remove residues before each enrichment step. As depicted in , compared with the first cycle, minimal changes in the number of obtained glycopeptides and their signal intensities were observed after five cycles. Even after being stored for two weeks at room temperature, SCOF-2 exhibited the same excellent enrichment performance as in the first use. A typical glycopeptide was selected as an indicator and its signal intensity was recorded; the intensities of the glycopeptide peaks changed only slightly , indicating great reusability and long-term stability. A quantitative stable-isotope dimethyl labeling method was used to estimate the recovery. As shown in , the enrichment recovery (L/H) was determined by comparing the peak intensity of the light-tagged glycopeptide with that of its heavy-tagged counterpart, and the recovery of SCOF-2 for glycopeptides was measured at 89.1%, demonstrating its excellent recovery capability. This result proved the great potential of SCOF-2 for the analysis of glycopeptides in complex real biological samples. 2.6. Application of SCOF-2 in Glycopeptide Enrichment from Tryptic Digests of Proteins Extracted from Human Serum Human serum is easily obtainable and appropriate for clinical testing; thus, the isolation and subsequent identification of glycopeptides in serum can be utilized for the discovery of tumor biomarkers, which can provide new approaches for developing diagnostic and therapeutic strategies. Here, we analyzed glycopeptides in human serum samples from ovarian cancer patients (Group HK, n = 3) and healthy volunteers (Group CK, n = 3) and performed three parallel experiments using the SCOF-2 workflow. After capturing glycopeptides from the serum of ovarian cancer patients and healthy volunteers with SCOF-2, the obtained peptides were analyzed by nano liquid chromatography-tandem mass spectrometry (nanoLC-MS/MS). In total, 196 glycopeptides and 227 glycosylation sites mapping to 82 glycoproteins were commonly identified in the ovarian cancer patients, compared with 194 glycopeptides and 225 glycosylation sites mapping to 82 glycoproteins in the healthy volunteers ( a and ; details are listed in ). The Venn diagram in summarizes the glycopeptide, glycoprotein and glycosylation site enrichment performance of SCOF-2 in the three experimental replicates of human serum from ovarian cancer patients and healthy volunteers. Notably, over 80% of the glycopeptides from ovarian cancer patients and healthy volunteers carried a single glycosylation site, while multiple glycosylation events per peptide were identified in less than 4% of the glycopeptides . To evaluate the biological significance of the identified glycoproteins and the biological functions they are involved in, we investigated gene ontology (GO) enrichment using the gene ontology database . In the biological process category, glycoproteins involved in the innate immune response were up-regulated in ovarian cancer patients compared to healthy controls, revealing differences between cancer patients and healthy individuals. In contrast, in the molecular function category, glycoproteins involved in immunoglobulin receptor binding were down-regulated in ovarian cancer patients compared to healthy controls. 
In terms of cellular components, glycoproteins were most strongly associated with the blood microparticle GO term, consistent with the fact that the HK and CK sample groups were derived from human serum. To comprehensively evaluate the differences between ovarian cancer patients and healthy controls, statistical analysis and quantitative comparison of the expression levels of the identified glycoproteins were performed. Principal component analysis (PCA) based on the quantitative enrichment results showed that the HK and CK groups were partly separated, revealing notable differences between the ovarian cancer patient group and the healthy control group. The three parallel experiments of the healthy control group were relatively dispersed, indicating larger individual differences among the healthy controls ( b). In agreement with the PCA, ovarian cancer patients and healthy volunteers showed heterogeneous glycoprotein abundance profiles ( c). The expression profiles of ovarian cancer patients and healthy controls captured by SCOF-2 formed distinct clusters in hierarchical cluster analysis (HCA), with the clusters colored in blue, red, and orange. In brief, the results above demonstrate a distinctive distribution profile of glycopeptides enriched by SCOF-2 that can reveal cancer-specific differences between healthy controls and ovarian cancer patients, allowing the identification of target glycoproteins. SCOF-2 was synthesized via a conventional Schiff-base reaction . By replacing the sulfonated building block with benzene-1,4-diamine (Pa), 2,5-diaminobenzenesulfonic acid (Pa-SO 3 H) or 4,4′-diaminobiphenyl-3,3′-disulfonic acid (BD-(SO 3 H) 2 ), the counterparts TpPa COF, SCOF-1 and SCOF-3 were synthesized for comparison. Moreover, by replacing Tp with the larger building block 1,3,5-tris(4-formylphenyl)benzene (TFPB), SCOF-4 was also obtained from the condensation reaction of TFPB and BD-(SO 3 H) 2 (the detailed synthesis procedures of TpPa COF, SCOF-1, SCOF-3 and SCOF-4 are shown in ). The crystalline structure of TpPa COF and the SCOFs was first probed using powder X-ray diffraction (PXRD) and theoretical structural simulations. The experimental PXRD results demonstrated that SCOF-2 had two well-resolved peaks at 4.25° and 26.23°, corresponding to the (100) and (001) facets, which matched well with the eclipsed AA stacking mode rather than the staggered AB stacking mode. After Pawley refinement, the simulated pattern reproduced the experimental results with satisfactory agreement factors of R p = 4.450% and R wp = 3.742% ( a and ). In addition, the experimental PXRD patterns of TpPa COF, SCOF-1, SCOF-3 and SCOF-4 also exhibited characteristic peaks that are highly consistent with the simulated crystallographic structures and previously reported works . These results proved that the TpPa COF and SCOFs were successfully synthesized. Furthermore, the N 2 adsorption/desorption isotherms demonstrated that TpPa COF and SCOF-1 exhibited combined type I and type IV sorption isotherm profiles, verifying the coexistence of micropore and mesopore structures. By contrast, SCOF-2 displayed a typical type IV isotherm, indicating the presence of mesopore structures ( b).
The Brunauer–Emmett–Teller (BET) surface areas of the non-sulfonate TpPa COF and SCOF-1 were calculated to be 532 and 274 m 2 g −1 , with average pore sizes of 1.9 and 1.1 nm , while SCOF-2 showed a much smaller surface area of 57 m 2 g −1 and a pore size of 1.4 nm, which might be attributed to the plentiful sulfonate groups in the pore channels . Additionally, Fourier transform infrared (FT-IR) spectra were collected to probe their chemical structures. Characteristic peaks at 1582 and 1248 cm −1 , assigned to the C=C and C–N stretching vibrations, respectively, were observed for the three COFs ( c). For SCOF-1 and SCOF-2, the newly formed peaks at 1080 and 1025 cm −1 were ascribed to the O=S=O stretching bands, demonstrating the presence of sulfonic groups. In the X-ray photoelectron spectroscopy (XPS) spectrum of SCOF-2, peaks at 531.5 eV (O 1s), 400.0 eV (N 1s), 284.7 eV (C 1s) and 167.9 eV (S 2p) were detected . Moreover, thermogravimetric analysis (TGA) showed that TpPa COF, SCOF-1 and SCOF-2 displayed favorable thermal stability under an N 2 atmosphere, retaining 34–49% of their weight even up to 800 °C ( d). Further, the static water contact angles of the synthesized TpPa COF and SCOFs are shown in e. Compared with TpPa COF, the water contact angle of SCOF-2 decreased by about 47° after introducing hydrophilic sulfonate groups, which laid the foundation for the selective adsorption of glycopeptides. The zeta potential measurements illustrated the surface charges of the TpPa COF and SCOFs in a buffered solution (pH = 6, the pH value of the loading buffer). The zeta potentials of the SCOFs were negative (−25.37, −26.13, −16.07 and −19.33 mV, respectively), indicating strongly negative surface charges arising from the sulfonate groups . According to the scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images, TpPa COF consisted of short nanofibers ; SCOF-2, SCOF-3 and SCOF-4 displayed uniform sheet-like morphologies ; while SCOF-1 showed long, relatively smooth nanofibers . f shows the high-angle annular dark-field scanning TEM (HAADF-STEM) image and elemental mapping images of SCOF-2, in which C, N, O and S were detected concurrently. TpPa COF and the other SCOFs were also characterized by elemental mapping in . Compared with traditional enrichment materials, COFs can be precisely customized and designed at the molecular level through the rational selection of porous organic building blocks. This provides theoretical guidance for screening candidate enrichment media in terms of hydrophilicity, affinity sites and molecular linkage, three likely key factors governing glycopeptide enrichment. Therefore, one non-sulfonate TpPa COF and four sulfonate-rich COFs (SCOF-1, SCOF-2, SCOF-3, and SCOF-4) with different topologies and different building blocks were selected as enrichment materials to investigate their selectivity for glycopeptides. We first compared the five COFs in terms of hydrophilicity, using the water contact angle as the main indicator. With the weakest hydrophilicity, TpPa COF showed a water contact angle of 88.7° and could enrich only 10 glycopeptides ( a). After introducing a hydrophilic sulfonate group into the building block of SCOF-1, the water contact angle decreased by over 55°, demonstrating enhanced hydrophilicity.
As shown in b, 22 glycopeptides with a relatively clean MS background were observed. Moreover, we introduced two functional hydrophilic sulfonate groups to synthesize SCOF-2. Surprisingly, 28 glycopeptides with high signal intensities were detected even though the water contact angle of SCOF-2 was a little higher than that of SCOF-1, which was probably attributable to the abundant affinity sites between the glycopeptides and the sulfonate groups ( c). Additionally, in order to further evaluate the interactions between the glycan moieties of the glycopeptides and the sulfonate group, we enlarged the size of the building blocks and obtained SCOF-3 and SCOF-4. As depicted in d, 21 glycopeptides and a few non-glycopeptides were observed in the MS spectrum for SCOF-3, indicating better enrichment selectivity than the non-sulfonate TpPa COF but worse than SCOF-2. Notably, SCOF-4 showed the strongest hydrophilicity of all the enrichment materials, with a water contact angle of 31.0°. However, only four glycopeptides with low signal intensities were detected, suggesting that SCOF-4 had the worst enrichment selectivity toward glycopeptides ( e). On the basis of the above results, SCOF-2 was selected for further study as the best-performing enrichment material, even though its hydrophilicity was not the strongest among the SCOF materials. Glycans are diverse structures that are composed of various monosaccharide building blocks including mannose (Man), galactose (Gal), fucose (Fuc), N -acetyl-glucosamine (GlcNAc), and others. The terminal Neu5Ac unit is one of the most common features of sialylated glycans and can interact with glycans on proteins to mediate adhesion and recognition. We therefore calculated the possible adsorption modes between the five COFs and the representative Neu5Ac unit. Adsorption calculations of Neu5Ac on the different COF models were carried out in the CP2K code using density functional theory (DFT) methods (the detailed DFT calculations are shown in ). As depicted in a, one kind of hydrogen bond was formed between the carbonyl moiety in the TpPa COF framework and the hydroxyl of Neu5Ac. After introducing a single sulfonate into the SCOF-1 framework, three kinds of hydrogen bonds were formed: one between the sulfonate moiety and the hydroxyl of Neu5Ac, and the others between the sulfonate moiety and the carboxylic acid of Neu5Ac ( b). Notably, owing to the abundant sulfonate affinity sites in SCOF-2, four kinds of hydrogen bonds were formed, not only between the sulfonate moiety and the carboxylic acid of Neu5Ac but also between the sulfonate moiety and two hydroxyls of Neu5Ac ( c). As for SCOF-3, because of the enlarged building block size, the adsorption structure was not stable, although the adsorption mode was similar to that of SCOF-1 ( d). As depicted in e, SCOF-4, with the largest building block size, formed only one kind of hydrogen bond, between the sulfonate moiety and the carboxylic acid of Neu5Ac, indicating that its adsorption mode was the least stable among the COFs. Side views of the possible adsorption modes between the five COFs and Neu5Ac are also shown in . In addition, the adsorption energy (ΔE) between Neu5Ac and each COF model was calculated, with the results shown in f.
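As an aside, a minimal sketch of how such an adsorption energy is typically obtained from the three total energies; the numbers below are placeholders chosen only to reproduce the order of magnitude reported next, not outputs of the CP2K calculations in this work.

```python
HARTREE_TO_KCAL = 627.509  # kcal mol-1 per Hartree

def adsorption_energy_kcal(e_complex_ha, e_cof_ha, e_neu5ac_ha):
    """dE = E(COF + Neu5Ac complex) - E(COF) - E(Neu5Ac); more negative = more stable."""
    return (e_complex_ha - e_cof_ha - e_neu5ac_ha) * HARTREE_TO_KCAL

# Placeholder total energies in Hartree, purely for illustration.
dE = adsorption_energy_kcal(-1250.912, -1180.580, -70.163)
print(f"adsorption energy ~ {dE:.1f} kcal mol-1")  # about -106 kcal mol-1
```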
SCOF-2 showed the largest ΔE magnitude (−106.02 kcal mol −1 ) of all the COF models, corresponding to the most stable adsorption structure, since a more negative energy value is more conducive to adsorption. These results showed that SCOF-2 exhibited satisfactory chemoselectivity toward Neu5Ac, suggesting strong potential for selective glycopeptide enrichment. To evaluate the enrichment performance of SCOF-2 for glycopeptides, a tryptic digest of the standard model glycoprotein horseradish peroxidase (HRP) was subjected to sequential loading, washing and elution with SCOF-2, and the eluate was analyzed using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Because retention depends on solvent polarity, the composition of the loading buffer strongly influences the outcome of glycopeptide enrichment. In this work, loading buffers of different compositions were examined by varying the concentrations of ACN (80%, 85%, 90%, and 95%) and TFA (0.05%, 0.1%, 0.5%, and 1.0%) . With 90% ACN and 0.1% TFA, the largest number of glycopeptides with high signal intensities was observed, demonstrating that the interactions between SCOF-2 and the glycopeptides became stronger under these conditions. Consequently, we adopted 90% ACN/0.1% TFA ( v / v ) as the optimal loading buffer. Moreover, to maximize glycopeptide recovery with SCOF-2, four different elution buffers (30% ACN with 0.05%, 0.1%, 0.5%, and 1.0% TFA) were investigated using the optimal loading buffer . The greatest number of glycopeptides was obtained with 30% ACN/0.1% TFA ( v / v ) as the elution buffer. Additionally, human serum immunoglobulin G (IgG) was chosen as another model glycoprotein to verify the optimal loading and elution buffers. As shown in , considering the number of detected glycopeptides and their signal intensities, we selected 90% ACN/1.5% TFA ( v / v ) and 30% ACN/0.5% TFA ( v / v ) as the best loading and elution buffers for IgG, respectively. Detailed information on the glycopeptides detected from the HRP and IgG tryptic digests is summarized in , respectively.
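A minimal sketch of how such a buffer screen could be tabulated and the best condition picked by glycopeptide count; all counts below are illustrative placeholders, and the selection logic simply keeps the condition with the most identifications.

```python
# Illustrative screen of loading-buffer compositions (ACN %, TFA %) against the
# number of glycopeptides detected after enrichment. Counts are hypothetical.
glycopeptide_counts = {
    (80, 0.1): 18, (85, 0.1): 23, (90, 0.1): 28, (95, 0.1): 21,
    (90, 0.05): 24, (90, 0.5): 22, (90, 1.0): 17,
}

best = max(glycopeptide_counts, key=glycopeptide_counts.get)
acn, tfa = best
print(f"optimal loading buffer: {acn}% ACN / {tfa}% TFA "
      f"({glycopeptide_counts[best]} glycopeptides detected)")
```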
Aberrant protein glycosylation is closely associated with a number of biological processes and diseases. However, characterizing these types of post-translational modifications (PTMs) in complex biological samples remains challenging for comprehensive glycoproteomic analysis . Therefore, the selective capture of low-abundance glycoproteins and glycopeptides from complex mixtures is an essential tool for in-depth glycoproteome research. To date, many HILIC-based materials, such as graphene oxide , polymer nanoparticles , and metal–organic frameworks , have been widely reported. Unfortunately, owing to limited glycopeptide-specific recognition sites, unsuitable mass transfer kinetics and a low relative density of hydrophilic groups, traditional HILIC materials usually suffer from low glycopeptide binding selectivity and low detection sensitivity. Covalent organic frameworks (COFs), as a class of long-range ordered porous organic materials, have shown great potential in many fields owing to their extensive tunability . The sulfonyl group exhibits excellent hydrophilicity and chemical stability; however, sulfonyl-functionalized COFs for the selective enrichment of glycopeptides have rarely been reported. Therefore, in this work, we set out to functionalize the as-prepared COFs with sulfonate groups. Herein, a non-sulfonate TpPa COF and sulfonate-rich COFs were introduced: SCOF-2, together with SCOF-1, SCOF-3 and SCOF-4, was obtained and applied to the enrichment of glycopeptides. Our study found that a total of 28 and 16 glycopeptides could be efficiently detected from the HRP and IgG tryptic digests, respectively. Moreover, the theoretical calculations, performed before the experiments, were consistent with the experimental results. The as-prepared SCOF-2 has an ultralow detection limit (0.01 fmol μL −1 ), excellent enrichment selectivity (molar ratio HRP:BSA = 1:5000), a satisfactory recovery rate (89.1%), a high adsorption capacity (150 mg g −1 ) and good reusability. Meanwhile, using the SCOF-2 adsorbent, 196 and 194 endogenous glycopeptides were successfully enriched and identified from the serum of ovarian cancer patients and healthy people, respectively. The incorporation of multiple hydrophilic sulfonate groups within the SCOF-2 structure induces a substantial enhancement in surface hydrophilicity. This enhancement originates from the formation of hydrogen-bonding networks through the exposed sulfonic acid moieties. Notably, the high areal density of these hydrophilic functionalities creates a spatially ordered recognition matrix with exceptional glycopeptide affinity. These findings position SCOF-2 as an outstanding material for glycoproteomic studies, particularly the detection of low-abundance glycopeptides, with enrichment performance superior to existing methods and other enrichment strategies. In summary, we constructed a novel hydrophilic sulfonate-rich COF (SCOF-2) for the enrichment of glycopeptides, demonstrating good specificity, high sensitivity and outstanding stability with standard protein digests, as well as excellent enrichment capacity and reproducibility toward glycopeptides.
As a result, SCOF-2 successfully enriched 196 glycopeptides from the serum of ovarian cancer patients, demonstrating its superiority and feasibility in the selective enrichment of glycopeptides. Notably, proteomic analysis of the captured proteins showed that healthy controls could be distinguished from ovarian cancer patients. In addition, the excellent performance of SCOF-2 with complex biological samples holds great potential for the early clinical detection of disease biomarkers arising from abnormal protein glycosylation. 4.1. Synthesis of SCOF-2 SCOF-2 was prepared according to a previously reported procedure with slight modification . Typically, 63 mg (0.3 mmol) of triformylphloroglucinol (Tp) and 120.6 mg (0.45 mmol) of 2,5-diaminobenzene-1,4-disulfonic acid (Pa-(SO 3 H) 2 ) were added to a Pyrex tube in the presence of 1.5 mL 1,4-dioxane, 1.5 mL mesitylene and 0.5 mL of 6 M aqueous acetic acid (AcOH). This mixture was then sonicated for 20 min to form a homogeneous suspension. The tube was subsequently flash-frozen in liquid nitrogen (77 K) and degassed by three freeze–pump–thaw cycles. The reaction mixture was then sealed and heated at 120 °C for 72 h under static conditions. After cooling, the deep brown precipitate was collected by filtration and washed with copious amounts of dimethylacetamide, deionized water and anhydrous tetrahydrofuran. The material was then dried under vacuum at 120 °C for 12 h to obtain SCOF-2 as a deep brown powder. The detailed synthesis procedures of TpPa COF, SCOF-1, SCOF-3 and SCOF-4 are shown in . 4.2. Enrichment of Glycopeptides from Tryptic Digests of Standard Proteins The glycopeptide enrichment process was as follows. First, 1.0 mg SCOF-2 was placed in a centrifuge tube and ultrasonically dispersed in 100 μL of loading buffer (90% ACN/0.1% TFA ( v / v ) for the HRP tryptic digest or 90% ACN/1.5% TFA ( v / v ) for the IgG tryptic digest). Then, 1.0 μL of the standard protein tryptic digest was added to the centrifuge tube and the mixture was incubated at 37 °C for 30 min. After enrichment, SCOF-2 was separated by centrifugation and washed three times with loading buffer to remove non-glycopeptides. Finally, 10 μL of elution buffer (30% ACN/0.1% TFA ( v / v ) for the HRP tryptic digest or 30% ACN/0.5% TFA ( v / v ) for the IgG tryptic digest) was added to collect the adsorbed hydrophilic glycopeptides, which were analyzed using MALDI-TOF MS. 4.3. Enrichment of Glycopeptides from Tryptic Digest of Human Serum Enrichment of glycopeptides from tryptic digests of human serum samples followed a similar procedure. First, the lyophilized human serum digest was redissolved in 100 μL of loading buffer (90% ACN/0.1% TFA ( v / v )) and incubated with 1.0 mg of SCOF-2 for 30 min. The mixture was then centrifuged for 6 min to remove the supernatant and washed three times with 100 μL of loading buffer to remove non-glycopeptides. Thereafter, the captured glycopeptides were eluted with 10 μL of elution buffer (30% ACN/0.1% TFA ( v / v )) for 10 min. Finally, the collected solution was lyophilized and desalted for subsequent deglycosylation and nanoLC-MS/MS analysis. 4.4. Contact Angle Measurement The measurement involves the following steps: first, the compound is milled, then compacted and uniformly deposited onto the substrate.
A water drop of defined volume is placed on the compound, a photograph is taken, and the angle between the tangent to the water drop and the substrate is measured. The angle can be measured manually with the instrument software. 4.5. MALDI-TOF MS Analysis All MALDI-TOF MS analyses were performed on a Bruker autoflex speed time-of-flight mass spectrometer in positive reflection mode with an Nd:YAG laser at 355 nm, a repetition rate of 200 Hz and an acceleration voltage of 20 kV over the m / z range of 2000–5200, and data were analyzed using flexAnalysis software (version 3.3). A volume of 1 μL of eluate and 1 μL of matrix solution (α-cyano-4-hydroxycinnamic acid, CHCA, 10 mg mL −1 ) were mixed and deposited on an AnchorChip standard MALDI plate for MALDI-TOF MS analysis. The detailed LC–MS/MS methods are shown in .
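For readers who want to keep the two buffer systems straight, a minimal sketch that encodes the enrichment parameters described above as a small lookup; the data structure and names are illustrative, not part of the published workflow.

```python
from dataclasses import dataclass

@dataclass
class EnrichmentProtocol:
    adsorbent_mg: float   # amount of SCOF-2
    loading_buffer: str   # ACN/TFA composition (v/v)
    elution_buffer: str
    incubation: str

# Parameters taken from the Methods above; the dictionary keys are illustrative.
PROTOCOLS = {
    "HRP digest":  EnrichmentProtocol(1.0, "90% ACN / 0.1% TFA", "30% ACN / 0.1% TFA", "37 C, 30 min"),
    "IgG digest":  EnrichmentProtocol(1.0, "90% ACN / 1.5% TFA", "30% ACN / 0.5% TFA", "37 C, 30 min"),
    "human serum": EnrichmentProtocol(1.0, "90% ACN / 0.1% TFA", "30% ACN / 0.1% TFA", "30 min"),
}

for sample, p in PROTOCOLS.items():
    print(f"{sample}: load in {p.loading_buffer}, elute with {p.elution_buffer} ({p.incubation})")
```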
A transcriptomic atlas of mouse cerebellar cortex comprehensively defines cell types
3ee0a0ce-8f06-465d-928c-1b743cd5225a
8494635
Physiology[mh]
The cerebellar cortex is composed of the same basic circuit replicated thousands of times. Mossy fibres from many brain regions excite granule cells that in turn excite Purkinje cells (PCs), the sole outputs of the cerebellar cortex. Powerful climbing fibre synapses, which originate in the inferior olive, excite PCs and regulate synaptic plasticity. Additional circuit elements include inhibitory interneurons such as molecular layer interneurons (MLIs), Purkinje layer interneurons (PLIs), Golgi cells, excitatory unipolar brush cells (UBCs) and supportive Bergmann glia. There is a growing recognition that cerebellar circuits exhibit regional specializations, such as a higher density of UBCs or more prevalent PC feedback to granule cells in some lobules. Molecular variation across regions has also been identified, such as the parasagittal banding pattern of alternating PCs with high and low levels of Aldoc expression . However, the extent to which cells are molecularly specialized in different regions is poorly understood. Achieving a comprehensive survey of cell types in the cerebellum poses some unique challenges. First, a large majority of the neurons are granule cells, making it difficult to accurately sample the rarer types. Second, for many of the morphologically and physiologically defined cell types—especially the interneuron populations—existing molecular characterization is extremely limited. Recent advances in single-cell RNA sequencing (scRNA-seq) technology – have increased the throughput of profiling to enable the systematic identification of cell types and states throughout the central nervous system – . Several recent studies have harnessed such techniques to examine some cell types in the developing mouse cerebellum – , but none has yet comprehensively defined mature cell types in the adult. We developed a pipeline for high-throughput single-nucleus RNA-seq (snRNA-seq) with high transcript capture efficiency and nuclei yield, as well as consistent performance across regions of the adult mouse and post mortem human brain (10.17504/protocols.io.bck6iuze; Methods). To comprehensively sample cell types in the mouse cerebellum, we dissected and isolated nuclei from 16 different lobules, across both female and male replicates (Fig. , Extended Data Fig. , Methods). We recovered 780,553 nuclei profiles with a median transcript capture of 2,862 unique molecular identifiers (UMIs) per profile (Extended Data Fig. ), including 530,063 profiles from male donors, and 250,490 profiles from female donors, with minimal inter-individual batch effects (Extended Data Fig. ). To discover cell types, we used a previously developed clustering strategy (Methods) to partition 611,034 high-quality profiles into 46 clusters. We estimate that with this number of profiles, we can expect to sample even extremely rare cell types (prevalence of 0.15%) with a probability of greater than 90%, which suggests that we captured most transcriptional variation within the cerebellum (Extended Data Fig. ). We assigned each cluster to one of 18 known cell type identities on the basis of the expression of specific molecular markers that are known to correlate with defining morphological, histological and/or functional features (Fig. , Supplementary Table ). These annotations were also corroborated by the expected layer-specific localizations of marker genes in the Allen Brain Atlas (ABA) ( https://mouse.brain-map.org ) (Fig. ). 
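As context for the sampling estimate above, a minimal sketch of that calculation with the parameters given later in the Methods (threshold k = 70 nuclei per type, prevalence p = 0.0015, m = 10 rare types); expressing it as a binomial tail probability is one common way to frame the same estimate, and the online calculator referenced in the Methods may differ in detail.

```python
from scipy.stats import binom

# Parameters from the Methods: k nuclei per type, prevalence p, m very rare types.
k, p, m = 70, 0.0015, 10
n = 611_034  # high-quality profiles retained for clustering

# Probability that a single rare type yields at least k nuclei, raised to the
# power m for m independently sampled rare types.
prob_single = 1.0 - binom.cdf(k - 1, n, p)
prob_all = prob_single ** m
print(f"P(>= {k} nuclei for each of {m} types at prevalence {p}) ~ {prob_all:.3f}")
```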
Several cell types contained multiple clusters defined by differentially expressed markers, which suggests further heterogeneity within those populations (Extended Data Fig. , Supplementary Table ). To quantify the regional specialization of cell types, we examined how our clusters distributed proportionally across each lobule. We found that eight of our nine PC clusters, as well as several granule cell clusters and one Bergmann glial cluster, showed the most significantly divergent lobule compositions (Pearson’s chi-squared test, false discovery rate (FDR) < 0.001; Methods) and exhibited greater than twofold enrichment in at least one lobule (Fig. ). There was high concordance in the regional composition of each of these types across replicates, which indicates consistent spatial enrichment patterns (Extended Data Fig. ). The nine PC clusters could be divided into two main groups on the basis of their expression of Aldoc , which defines parasagittal striping of Purkinje neurons across the cerebellum . Seven of the nine PC clusters were Aldoc -positive, indicating greater specialization in this population compared with the Aldoc -negative PCs. Combinatorial expression of Aldoc and at least one subtype-specific marker fully identified the Purkinje clusters (Fig. ). These Aldoc -positive and Aldoc- negative groups showed a regional enrichment pattern that was consistent with the known paths of parasagittal stripes across individual lobules (Fig. ). When characterizing the spatial variation of the PC subtypes, we found some with spatial patterns that were recently identified using Slide-seq technology (Aldoc_1 and Aldoc_7, marked by Gpr176 and Tox2 , respectively) , as well as several undescribed subtypes and patterns (Fig. , Extended Data Fig. ). Most of this PC diversity was concentrated in the posterior cerebellum, particularly the uvula and nodulus, consistent with these regions showing greater diversity in both function and connectivity , . We also observed regional specialization in excitatory interneurons and Bergmann glia. Among the five granule cell subtypes (Fig. ), three displayed cohesive spatial enrichment patterns (subtypes 1, 2 and 3) (Fig. , Extended Data Fig. ). In addition, and consistent with previous work , the UBCs as a whole were highly enriched in the posterior lobules (Extended Data Fig. ). Finally, we identified a Bergmann glial subtype that expressed the marker genes Mybpc1 and Wif1 (Fig. ), with high enrichment in lobule VI, the uvula and nodulus (Fig. , Extended Data Fig. ). The regional specialization of interneuron and glial populations is in contrast to the cerebral cortex, where molecular heterogeneity across regions is largely limited to projection neurons , . Molecularly defined cell populations can be highly discrete—such as the distinctions between chandelier and basket interneuron types in the cerebral cortex —or they can vary more continuously, such as the cross-regional differences among principal cells of the striatum , and cortex , . The cerebellum is known to contain several canonical cell types that exist as morphological and functional continua, such as the basket and stellate interneurons of the molecular layer . To examine continuous features of molecular variation in greater detail within interneuron types, we created a metric to quantify and visualize the continuity of gene expression between two cell clusters. 
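One way to realize such a metric is sketched below, anticipating the logistic-fit description in the next paragraph; the data shapes, noise model and parameter choices are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, bottom, top, slope, midpoint):
    """Four-parameter logistic curve along a trajectory coordinate t in [0, 1]."""
    return bottom + (top - bottom) / (1.0 + np.exp(-slope * (t - midpoint)))

def max_slope(traj_position, expression):
    """Fit a logistic curve to one gene's expression along the trajectory and return
    the maximum slope m of the fit (smaller m suggests more continuous variation)."""
    p0 = [float(expression.min()), float(expression.max()), 5.0, 0.5]
    (bottom, top, slope, midpoint), _ = curve_fit(
        logistic, traj_position, expression, p0=p0, maxfev=10000)
    # The derivative of a logistic is maximal at its midpoint: slope * (top - bottom) / 4.
    return abs(slope * (top - bottom)) / 4.0

# Toy example: a gradually varying gene versus a step-like gene along 300 cells.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 300)
gradual = t + rng.normal(0.0, 0.05, t.size)
steplike = (t > 0.5).astype(float) + rng.normal(0.0, 0.05, t.size)
print(max_slope(t, gradual), max_slope(t, steplike))  # the step-like gene yields a larger m
```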
In brief, we fit a logistic curve for differentially expressed genes along the dominant expression trajectory , extracting the maximum slope ( m ) of the curve (Methods, Fig. ). We expect m values to be smaller for genes that are representative of more continuous expression variation (Fig. ). Our cluster analysis initially identified three populations of UBCs, similar in number to the two to four discrete types suggested by previous immunohistochemistry studies – . However, comparing m values across 200 highly variable genes within the UBC, Golgi cell and MLI populations suggested that in UBCs, many genes showed continuous variation (Fig. ), including Grm1 , Plcb4 , Calb2 and Plcb1 (Fig. ). Cross-species, integrative analysis with cerebellar cells derived from two post mortem human donors (Methods) revealed evolutionary conservation of the continuum (Extended Data Fig. ), with graded expression of many of the same genes, including Grm1 and Grm2 (Extended Data Fig. ). Functionally, UBCs have been classified on the basis of their response to mossy fibre activation. Discrete ON and OFF categories have previously been emphasized, although some properties of UBCs do not readily conform to these distinct categories , , . Here we focused on whether the molecular gradients in the expression of metabotropic receptors readily translated to a continuum of functional properties. We pressure-applied glutamate and measured the spiking responses of UBCs with on-cell recordings, and then measured glutamate-evoked currents in the cell (Methods). In some cells, glutamate rapidly and transiently increased spiking and evoked a long-lasting inward current (Fig. , top left). For other cells, glutamate transiently suppressed spontaneous firing and evoked an outward current (Fig. , bottom left). Many UBCs, however, had more complex, mixed responses to glutamate; we refer to these as ‘biphasic’ cells. In one cell, for example, glutamate evoked a delayed increase in firing, caused by an initial outward current followed by a longer lasting inward current (Fig. , middle left). A summary of the glutamate-evoked currents (Fig. , right) suggests that the graded nature of the molecular properties of UBCs may lead to graded electrical response properties. To link the functional and molecular continua more directly, we recorded from cells treated with agonists of mGluR1 ( Grm1 ) or mGluR2 ( Grm2 ) (Fig. ). Responses were graded across the UBC population, with a significant number of cells that responded to both agonists (Fig. , Extended Data Fig. ). This suggests that the biphasic response profile probably corresponds to the molecular continuum defined by snRNA-seq. Further studies are needed to determine the relationship between these diverse responses to applied agonists, and the responses of the cells to mossy fibre activation. MLIs are spontaneously active interneurons that inhibit PCs as well as other MLIs. MLIs are canonically subdivided into stellate cells located in the outer third of the molecular layer, and basket cells located in the inner third of the molecular layer that synapse onto PC somata and form specialized contacts known as pinceaus, which ephaptically inhibit PCs. Many MLIs, particularly those in the middle of the molecular layer, share morphological features with both basket and stellate cells . Thus, MLIs are thought to represent a single functional and morphological continuum. Our clustering analysis of MLIs and PLIs, by contrast, identified two discrete populations of MLIs. 
The first population, ‘MLI1’, uniformly expressed Lypd6 , Sorcs3 and Ptprk (Figs. c, ). The second population, ‘MLI2’, was highly molecularly distinct from MLI1, and expressed numerous markers that are also found in PLIs, such as Nxph1 and Cdh22 (Fig. ). Single-molecule fluorescence in situ hybridization (smFISH) experiments with Sorcs3 and Nxph1 showed that the markers were entirely mutually exclusive (Fig. ). A cross-species analysis with 14,971 human MLI and PLI profiles demonstrated that the MLI1 and MLI2 distinction is evolutionarily conserved (Extended Data Fig. ). To examine the developmental specification of these two populations, we clustered 79,373 total nuclei from peri- and postnatal mice across several time points (ranging from embryonic day (E) 18 to postnatal day (P) 16). From a cluster of 5,519 GABA (γ-aminobutyric acid)-producing neuron progenitors, marked by the expression of canonical markers Tfap2b , Ascl1 and Pax2 , (Methods, Extended Data Fig. ), we were able to distinguish developmental trajectories that corresponded to the MLI1 ( Sorcs3 -positive) and MLI2 ( Nxph1 -positive, Klhl1- negative) populations, with differentiation of the two types beginning at P4 and largely complete by P16 (Fig. , Extended Data Fig. ). Although both populations originate from a single group of progenitors, trajectory analysis revealed several lineage-specific markers (Extended Data Fig. ). Among the MLI2 trajectory markers, we identified genes such as Fam135b , the expression of which persisted into adulthood, and Fos , which is only transiently differentially expressed between the MLI1 and MLI2 trajectories (Fig. , Extended Data Fig. ). This high expression of several immediate early genes (Extended Data Fig. ) selectively in early MLI2 cells could indicate that differential activity is associated with MLI2 specification. MLI1s and MLI2s were present throughout the entire molecular layer, which indicates that the distinction between MLI1 and MLI2 does not correspond to the canonical basket and stellate distinction (Extended Data Fig. ). To understand the morphological, physiological and molecular characteristics of the MLI populations better, we developed a pipeline to record from individual MLIs in brain slices, image their morphologies, and then ascertain their molecular MLI1 and MLI2 identities by smFISH (Methods, Fig. ). Consistent with the marker analysis (Fig. ), MLI1s had a stellate morphology in the distal third of the molecular layer, whereas MLI1s located near the PC layer had a basket morphology, with contacts near PC initial segments (Fig. , Extended Data Fig. ). We next examined whether MLI2s, in which we could not identify systematic molecular heterogeneity, had graded morphological properties. MLI2s in the distal third of the molecular layer also had stellate cell morphology, whereas MLI2s near the PC layer had a distinct morphology and appeared to form synapses preferentially near the PC layer (Extended Data Fig. ). Although further studies are needed to determine whether MLI2s form pinceaus, it is clear that both MLI1 and MLI2 showed a similar continuum in their morphological properties. The electrical characteristics of MLI1s and MLI2s also showed numerous distinctions. The average spontaneous firing rate was significantly higher for MLI1s than for MLI2s (Mann–Whitney test, P = 0.0015) (Fig. ), and the membrane resistance ( R m ) of MLI1s was lower than that of MLI2s (Fig. ). In addition, we found that MLI2s were more excitable than MLI1s (Fig. 
), and displayed a stronger hyperpolarization-activated current (Extended Data Fig. ). MLIs are known to be electrically coupled via gap junctions , but it is not clear whether this is true for both MLI1s and MLI2s. In the cerebral cortex and some other brain regions, interneurons often electrically couple selectively to neurons of the same type, but not other types , . We therefore examined whether this also applies to MLI1s and MLI2s. The expression of Gjd2 , the gene encoding the dominant gap junction protein in MLIs , was found in MLI1s but not MLI2s, both in our single-nucleus data (Fig. ) and by smFISH (Fig. ), which suggests potential differences in electrical coupling. Notably, the two clusters of Golgi cells, another interneuron type known to be electrically coupled , , differentially expressed many of the same markers, including Sorcs3 , Gjd2 and Nxph1 in both human and mouse (Extended Data Figs. e, f, ). Action potentials in coupled MLIs produce small depolarizations known as spikelets that are thought to promote synchronous activity between MLIs . We therefore investigated whether spikelets are present in MLI1s and absent in MLI2s. Consistent with the gene expression profile, we observed spikelets in 71% of MLI1s and not in MLI2s (Fig. ; P < 0.001, Fisher’s exact test). These findings suggest that most MLI1s are coupled to other MLI1s by gap junctions, whereas MLI2s show no electrical coupling to other MLIs. In this Article, we used high-throughput, region-specific transcriptome sampling to build a comprehensive taxonomy of cell types in the mouse cerebellar cortex, and quantify spatial variation across individual regions. Our joint analyses with post mortem human samples indicated that the neuronal populations defined in mouse were generally conserved in human (Extended Data Fig. ), consistent with a recent comparative analysis in the cerebral cortex . We find considerably more regional specialization in PCs—especially in posterior lobules—than was previously recognized. These PC subtypes overlap with greater local abundances in UBCs and in distinct specializations in granule cells, which indicates a higher degree of regional circuit heterogeneity than previously thought. Our dataset is freely available to the neuroscience community ( https://portal.nemoarchive.org/ ; https://singlecell.broadinstitute.org ), facilitating functional characterization of these populations, many of which are entirely novel. One of the biggest challenges facing the comprehensive cell typing of the brain is the correspondence problem : how to integrate definitions of cell types on the basis of the many modalities of measurement used to characterize brain cells. We found success by first defining populations using systematic molecular profiling, and then relating these populations to physiological and morphological features using targeted, joint analyses of individual cells. We were surprised that the cerebellar MLIs—one of the first sets of neurons to be characterized more than 130 years ago —are in fact composed of two molecularly and physiologically discrete populations, that each shows a similar morphological continuum along the depth axis of the molecular layer. As comprehensive cell typing proceeds across other brain regions, we expect the emergence of similar basic discoveries that challenge and extend our understanding of cellular specialization in the nervous system. 
Animals Nuclei suspensions for mouse (C57BL/6J, Jackson Labs) cerebellum profiles were generated from 2 female and 4 male adult mice (60 days old), 1 male E18 mouse, 1 male P0 (newborn) mouse, 1 female P4 (4 days old) mouse, 1 female P8, 2 male P12 and 2 female P16 mice. Adult mice were group-housed with a 12-h light-dark schedule and allowed to acclimate to their housing environment for two weeks after arrival. Timed pregnant mice were received and euthanized to yield E18 mice 6 days after arrival. Newborn mice were housed as individual litters for up to 16 days. All experiments were approved by and in accordance with Broad IACUC protocol number 012-09-16. Brain preparation At E18, P0, P4, P8, P12, P16 and P60, C57BL/6J mice were anaesthetized by administration of isoflurane in a gas chamber flowing 3% isoflurane for 1 min. Anaesthesia was confirmed by checking for a negative tail and paw pinch response. Mice were moved to a dissection tray and anaesthesia was prolonged via a nose cone flowing 3% isoflurane for the duration of the procedure. Transcardial perfusions were performed on adult, pregnant (E18), P8, P12 and P16 mice with ice-cold pH 7.4 HEPES buffer containing 110 mM NaCl, 10 mM HEPES, 25 mM glucose, 75 mM sucrose, 7.5 mM MgCl 2 , and 2.5 mM KCl to remove blood from the brain. P0 and P4 mice were unperfused. The brain was removed from P60, P8, P12 and P16 mice and frozen for 3 min in liquid nitrogen vapour. E18, P0 and P4 mice were sagittally bisected after similarly freezing their brains in situ. All tissue was moved to −80 °C for long-term storage. A detailed protocol is available at protocols.io (10.17504/protocols.io.bcbrism6). Generation of cerebellar nuclei profiles Frozen adult mouse brains were securely mounted by the frontal cortex onto cryostat chucks with OCT embedding compound such that the entire posterior half including the cerebellum and brainstem were left exposed and thermally unperturbed. Dissection of each of 16 cerebellar vermal and cortical lobules was performed by hand in the cryostat using an ophthalmic microscalpel (Feather safety Razor P-715) pre-cooled to −20 °C and donning four surgical loupes. Whole E18, P0, P4, P8, P12 and P16 mouse cerebella were similarly curated by dissecting rhombomeric cerebellar rudiments from sagittal frozen brain hemispheres using a pre-cooled 1-mm disposable biopsy punch (Integra Miltex). Each excised tissue dissectate was placed into a pre-cooled 0.25 ml PCR tube using pre-cooled forceps and stored at −80 °C. Nuclei were extracted from this frozen tissue using gentle, detergent-based dissociation, according to a protocol available at protocols.io (10.17504/protocols.io.bck6iuze) adapted from one provided by the McCarroll laboratory (Harvard Medical School), and loaded into the 10x Chromium V3 system. Reverse transcription and library generation were performed according to the manufacturer’s protocol. Floating slice hybridization chain reaction on acute slices Acute cerebellar slices containing Alexa 594-filled patched cells were fixed as described and stored in 70% ethanol at 4 °C until hybridization chain reaction (HCR). They were then subjected to a ‘floating slice HCR’ protocol in which the recorded cells could be simultaneously re-imaged in conjunction with HCR expression analysis in situ and catalogued as to their positions in the cerebellum. 
A detailed protocol (10.17504/protocols.io.bck7iuzn) was performed using the following HCR probes and matching hairpins purchased from Molecular Instruments: glutamate metabotropic receptor 8 ( Grm8 ) lot number PRC005, connexin 36 ( Gjd2 ) lot number PRD854 and PRA673, cadherin22 ( Cdh22 ) lot number PRC011, neurexophilin 1 ( Nxph1 ) lot number PRC675 and PRC466, leucine-rich glioma-inactivated protein 2 ( Lgi2 ) lot number PRC012, somatostatin ( Sst ) lot number PRA213 and sortilin related VPS10 domain containing receptor 3 ( Sorcs3 ) lot number PRC004. Amplification hairpins used were type B1, B2 and B3 in 488 nm, 647 nm and 546 nm respectively. Patch fill and HCR co-imaging After floating-slice HCR, slices were mounted between no.1 coverslips with antifade compound (ProLong Glass, Invitrogen) and images were collected on an Andor CSU-X spinning disk confocal system coupled to a Nikon Eclipse Ti microscope equipped with an Andor iKon-M camera. The images were acquired with an oil immersion objective at 60×. The Alexa 594 patched cell backfill channel (561 nm) plus associated HCR probe/hairpin channels (488 nm and 647 nm) were projected through a 10–20-μm thick z -series so that an unambiguous determination of the association between the patch-filled cell and its HCR gene expression could be made. Images were processed using Nikon NIS Elements 4.4 and Nikon NIS AR. Human brain and nuclei processing Human donor tissue was supplied by the Human Brain and Spinal Fluid Resource Center at UCLA, through the NIH NeuroBioBank. This work was determined by the Office of Research Subjects Protection at the Broad Institute not to meet the definition of human subjects research (project ID NHSR-4235). Nuclei suspensions from human cerebellum were generated from two neuropathologically normal control cases—one female tissue donor, aged 35, and one male tissue donor, aged 36. These fresh frozen tissues had post mortem intervals of 12 and 13.5 h respectively, and were provided as whole cerebella cut into four coronal slabs. A sub-dissection of frozen cerebellar lobules was performed on dry ice just before 10x processing and nuclei were extracted from this frozen tissue using gentle, detergent-based dissociation, according to a protocol available at protocols.io (10.17504/protocols.io.bck6iuze). Electrophysiology experiments Acute parasagittal slices were prepared at 240-μm thickness from wild-type mice aged P30–P50. Mice were anaesthetized with an intraperitoneal injection of ketamine (10 mg kg −1 ), perfused transcardially with an ice-cold solution containing (in mM): 110 choline chloride, 7 MgCl 2 , 2.5 KCl, 1.25 NaH 2 PO 4 , 0.5 CaCl 2 , 25 glucose, 11.5 sodium ascorbate, 3 sodium pyruvate, 25 NaHCO 3 , 0.003 ( R )-CPP, equilibrated with 95% O 2 and 5% CO 2 . Slices were cut in the same solution and were then transferred to artificial cerebrospinal fluid (ACSF) containing (in mM) 125 NaCl, 26 NaHCO 3 , 1.25 NaH 2 PO 4 , 2.5 KCl, 1 MgCl 2 , 1.5 CaCl 2 and 25 glucose equilibrated with 95% O 2 and 5% CO 2 at approximately 34 °C for 30 min. Slices were then kept at room temperature until recording. All UBC recordings were done at 34–36 °C with (in μM) 2 ( R )-CPP, 5 NBQX, 1 strychnine, 10 SR95531 (gabazine) and 1.5 CGP in the bath to isolate metabotropic currents. Loose cell-attached recordings were made with ACSF-filled patch pipettes of 3–5 MΩ resistance. 
Whole-cell voltage-clamp recordings were performed while holding the cell at −70 mV with an internal containing (in mM): 140 KCl, 4 NaCl, 0.5 CaCl 2 , 10 HEPES, 4 MgATP, 0.3 NaGTP, 5 EGTA, and 2 QX-314, pH adjusted to 7.2 with KOH. Brief puffs of glutamate (1 mM for 50 ms at 5 psi) were delivered using a Picospritzer II (General Valve Corp.) in both cell-attached and whole-cell configuration to assure consistent responses. The heat map of current traces from all cells is sorted by the score over the first principal axis after singular value decomposition (SVD) of recordings over all cells. For whole-cell recordings with pharmacology, we used a K-methanesulfonate internal containing (in mM): 122 K-methanesulfonate, 9 NaCl, 9 HEPES, 0.036 CaCl 2 , 1.62 MgCl 2 , 4 MgATP, 0.3 GTP (Tris salt), 14 creatine phosphate (Tris salt), and 0.18 EGTA, pH 7.4. A junction potential of −8 mV was compensated for during recording. 300 nM TTX was added to the ACSF in conjunction with the synaptic blockers listed above. Three pipettes filled with ACSF containing 1 mM glutamate, 100 μM DHPG or 1 μM LY354740 were positioned within 20 μm of the recorded cell. Pressure applications of each agonist were delivered at 10 psi with durations of 40–50 ms. Agonist applications were separated by 30 s. Two to three trials were collected for each agonist. MLI recordings were performed at approximately 32 °C with an internal solution containing (in mM) 150 K-gluconate, 3 KCl, 10 HEPES, 3 MgATP, 0.5 GTP, 5 phosphocreatine-tris 2 and 5 phosphocreatine-Na 2 , 2 mg ml −1 biocytin and 0.1 Alexa 594 (pH adjusted to 7.2 with KOH, osmolality adjusted to 310 mOsm kg −1 ). Visually guided whole-cell recordings were obtained with patch pipettes of around 4 MΩ resistance pulled from borosilicate capillary glass (BF150-86-10, Sutter Instrument). Electrophysiology data were acquired using a Multiclamp 700B amplifier (Axon Instruments), digitized at 20 kHz and filtered at 4 kHz. For isolating spikelets in MLI recordings, cells were held at −65 mV in voltage clamp and the following receptor antagonists were added to the solution (in μM) to block synaptic currents: 2 ( R )-CPP, 5 NBQX, 1 strychnine, 10 SR95531 (gabazine), 1.5 CGP. All drugs were purchased from Abcam and Tocris. To obtain an input-output curve, MLIs were maintained at −60 to −65 mV with a constant hyperpolarizing current, and 250 ms current steps ranging from −30 pA to +100 pA were injected in 10 pA increments. To activate the hyperpolarization-evoked current ( I h ), MLIs were held at −65 mV and a 30 pA hyperpolarizing current step of 500 ms duration was injected. The amplitude of I h was calculated as the difference between the maximal current evoked by the hyperpolarizing current step and the average steady-state current at the end (480–500 ms) of the current step. Capacitance and input resistance ( R i ) were determined using a 10 pA, 50 ms hyperpolarizing current step. To prevent excessive dialysis and to ensure successful detection of mRNAs in the recorded cells, the total duration of recordings did not exceed 10 min. Acquisition and analysis of electrophysiological data were performed using custom routines written in MATLAB (Mathworks), IgorPro (Wavemetrics), or AxoGraphX. Data are reported as median ± interquartile range, and statistical analysis was carried out using the Mann–Whitney or Fisher's exact test, as indicated. Statistical significance was assumed at P < 0.05.
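A minimal sketch of the two tests named above using scipy.stats; the firing-rate arrays and the spikelet contingency table are placeholders, not the recorded data.

```python
from scipy.stats import mannwhitneyu, fisher_exact

# Placeholder firing rates (Hz) for the two molecularly defined MLI populations.
mli1_rates = [22.1, 18.4, 30.2, 25.7, 19.9, 27.3]
mli2_rates = [9.8, 14.2, 7.5, 12.1, 10.6]

u_stat, p_rates = mannwhitneyu(mli1_rates, mli2_rates, alternative="two-sided")

# Placeholder 2x2 table: rows = MLI1/MLI2, columns = spikelets present/absent.
table = [[10, 4],   # MLI1: with / without spikelets
         [0, 12]]   # MLI2: with / without spikelets
odds, p_spikelets = fisher_exact(table, alternative="two-sided")

print(f"Mann-Whitney P = {p_rates:.4f}; Fisher's exact P = {p_spikelets:.4f}")
```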
To determine the presence of spikelets, peak detection was used to generate event-triggered average waveforms with thresholds based on the mean absolute deviation (MAD) of the raw trace. Spikelet recordings were scored for the presence of spikelets blind to the molecular identity of the cells. The analysis was restricted to cells recorded in the presence of synaptic blockers. Imaging and analysis MLIs were filled with 100 μM Alexa-594 via patch pipette to visualize their morphology using two-photon imaging. After completion of the electrophysiological recordings the patch electrode was retracted slowly and the cell resealed. We used a custom-built two-photon laser-scanning microscope with a 40×, 0.8 numerical aperture (NA) objective (Olympus Optical) and a pulsed two-photon laser (Chameleon or MIRA 900, Coherent, 800 nm excitation). DIC images were acquired at the end of each experiment and locations of each cell within the slice were recorded. Two-photon images were further processed in ImageJ. Tissue fixation of acute slices After recording and imaging, cerebellar slices were transferred to a well-plate and submerged in 2–4% PFA in PBS (pH 7.4) and incubated overnight at 4 °C. Slices were then washed in PBS (3 × 5 min) and then kept in 70% ethanol in RNase-free water until HCR was performed. Preprocessing of sequencing reads Sequencing reads from mouse cerebellum experiments were demultiplexed and aligned to a mouse (mm10) pre-mRNA reference using CellRanger v3.0.2 with default settings. Digital gene expression matrices were generated with the CellRanger count function. Sequencing reads from human cerebellum experiments were demultiplexed and aligned to a human (hg19) pre-mRNA reference using the Drop-seq alignment workflow, which was also used to generate the downstream digital gene expression matrices. Estimation of adequate rare cell type detection To estimate the probability of sufficiently sampling rare cell types in the cerebellum as a function of total number of nuclei sampled, we used the approach proposed by the Satija laboratory ( https://satijalab.org/howmanycells ), with the assumption of at most 10 very rare cell types, each with a prevalence of 0.15%. We derived this minimum based on the observed prevalences of the two rarest cell types we identified (OPC_4, Purkinje_Aldoc_2). We set 70 cells as the threshold for sufficient sampling, and calculated the overall probability as a negative binomial (NB) density, $\mathrm{NB}(k; n, p)^{m}$, in which k = 70, p = 0.0015, m = 10, and n represents the total number of cells sampled. Cell type clustering and annotation After generation of digital gene expression matrices as described above, we filtered out nuclei with fewer than 500 UMIs. We then performed cell type annotation iteratively through a number of rounds of dimensionality reduction, clustering, and removal of putative doublets and cells with high mitochondrial expression. For the preliminary clustering step, we performed standard preprocessing (UMI normalization, highly variable gene selection, scaling) with Seurat v2.3.4 as previously described. We used principal component analysis (PCA) with 30 components and Louvain community detection with resolution 0.1 to identify major clusters (resulting in 34 clusters).
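As a point of reference for the rare-cell-type sampling estimate described above (and not the authors' own code), the probability of capturing at least k = 70 nuclei from each of m = 10 cell types present at a frequency of p = 0.0015 can be evaluated in R from the negative binomial CDF; the use of pnbinom here is our assumption about how the Satija-laboratory calculation is implemented:

k <- 70       # minimum number of nuclei required per rare cell type
p <- 0.0015   # assumed prevalence of each rare cell type
m <- 10       # number of rare cell types
prob_enough <- function(n) {
  # P(at least k of the n sampled nuclei belong to a given rare type),
  # via the negative binomial CDF, raised to the power m for m independent types
  pnbinom(n - k, size = k, prob = p)^m
}
prob_enough(c(6e4, 1e5, 2e5))   # probabilities for several total sample sizes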
At this stage, we merged several clusters (primarily granule cell clusters) based on shared expression of canonical cell type markers, and removed one cluster whose top differentially expressed genes were mitochondrial (resulting in 11 clusters). For subsequent rounds of cluster annotation within these major cell type clusters, we applied a variation of the LIGER workflow previously described , using integrative non-negative matrix factorization (iNMF) to limit the effects of sample- and sex-specific gene expression. In brief, we normalized each cell by the number of UMIs, selected highly variable genes and spatially variable genes (see section below), performed iNMF, and clustered using Louvain community detection (omitting the quantile normalization step). Clusters whose top differentially expressed genes indicated contamination from a different cell type or high expression of mitochondrial genes were removed during the annotation process, and not included in subsequent rounds of annotation. This iterative annotation process was repeated until no contaminating clusters were identified in a round of clustering. Differential expression analysis within rounds of annotation was performed with the Wilcoxon rank sum test using Seurat’s FindAllMarkers function. Comprehensive differential expression analysis across all 46 final annotated clusters was performed using the wilcoxauc function from the presto package . A full set of parameters used in the LIGER annotation steps and further details can be found in Supplementary Table . For visualization as in Fig. , we merged all annotated high-quality nuclei and repeated preliminary preprocessing steps before performing UMAP using 25 principal components. Integrated analysis of human and mouse data After generation of digital gene expression matrices for the human nuclei profiles, we filtered out nuclei with fewer than 500 UMIs. We then performed a preliminary round of cell type annotation using the standard LIGER workflow (integrating across batches) to identify the primary human interneuron populations (UBCs, MLIs and PLIs, Golgi cells, granule cells; based on the same markers as in Supplementary Table ). We repeated an iteration of the same workflow for the four cell populations specified above (with an additional quantile normalization step) in order to identify and remove putative doublet and artefactual populations. Finally, we performed iNMF metagene projection as previously described to project the human datasets into latent spaces derived from the corresponding mouse cell type datasets. We then performed quantile normalization and Louvain clustering, assigning joint clusters based on the previously annotated mouse data clusters. For the granule cell joint analysis, we first limited the mouse data to include only the five cerebellar regions sampled in human data collection (lobules II, VII, VIII, IX and X). For the Golgi cell joint analysis, we performed iNMF (integrating across species), instead of metagene projection. Spatially variable gene selection To identify genes with high regional variance, we first computed the log of the index of dispersion (log variance-to-mean ratio, logVMR) for each gene, across each of the 16 lobular regions. Next, we simulated a Gaussian null distribution whose centre was the logVMR mode, found by performing a kernel density estimation of the logVMRs (using the density function in R, followed by the turnpoints function). 
The standard deviation of the Gaussian was computed by reflecting the values less than the mode across the centre. Genes whose logVMRs were in the upper tail with P < 0.01 (Benjamini–Hochberg adjusted) were ruled as spatially variable. For the granule cell and PC cluster analyses, adjusted P -value thresholds were set to 0.001 and 0.002, respectively. Cluster regional composition testing and lobule enrichment To determine whether the lobule composition of a cluster differs significantly from the corresponding outer level cell type lobule distribution, we used a multinomial test approximated by Pearson's chi-squared test with k − 1 degrees of freedom, in which k was the total number of lobules sampled (16). The expected number of nuclei for a cluster i and lobule j was estimated as $E_{ij} = N_i \times \frac{N_j}{\sum_j N_j}$, where N i is the total number of nuclei in cluster i and N j is the number of nuclei in lobule j (across all clusters in the outer level cell type, as defined below). The resulting P values were FDR-adjusted (Benjamini–Hochberg) using the p.adjust function in R. Lobule enrichment (LE) scores for each cluster i and each lobule j were calculated as $\mathrm{LE}_{ij} = \left(\frac{n_{ij}}{\sum_j n_{ij}}\right) \Big/ \left(\frac{N_j}{\sum_j N_j}\right)$, in which n ij is the observed number of nuclei in cluster i and lobule j , and N j is the number of nuclei in lobule j (across all clusters in the outer level cell type). For this analysis, we used coarse cell type definitions shown coloured in the Fig. , and merged the PLI clusters. For lobule composition testing and replicate consistency analysis below, we downsampled granule cells to 60,000 nuclei (the next most numerous cell type were the MLI and PLI clusters with 45,767 nuclei). To determine the consistency of lobule enrichment scores across replicates in each region, we designated two sets of replicates by assigning nuclei from the most represented replicate in each region and cluster analysis to 'replicate 1' and nuclei from the second most represented replicate in each region to 'replicate 2'. This assignment was used because not all regions had representation from all individuals profiled, and some had representation from only two individuals. We calculated lobule enrichment scores for each cluster using each of the replicate sets separately; we then calculated the Pearson correlation between the two sets of lobule enrichment scores for each cluster. We would expect correlation to be high for clusters when lobule enrichment is biologically consistent. We note that one cluster (Purkinje_Aldoc_2), was excluded from the replicate consistency analysis as under this design, it had representation from only a single aggregated replicate. However, we confirmed that lobule enrichment for this cluster was strongly consistent with Allen Brain Atlas expression staining (Extended Data Fig. ). Continuity of gene expression To characterize molecular variation across cell types, we attempted to quantify the continuity of scaled gene expression across a given cell type pair, ordered by pseudotime rank (calculated using Monocle2). For each gene, we fit a logistic curve to the scaled gene expression values and calculated the maximum slope ( m ) of the resulting curve, after normalizing for both the number of cells and dynamic range of the logistic fit. To limit computational complexity, we downsampled cell type pairs to 5,000 total nuclei.
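A minimal R sketch of the lobule-composition test and lobule enrichment score described above, assuming counts is a clusters-by-lobules matrix of nuclei counts (the variable names are illustrative and not taken from the original analysis code):

# counts: matrix of nuclei counts, rows = clusters, columns = 16 lobules
lobule_totals <- colSums(counts)                      # N_j across the outer cell type
lobule_frac   <- lobule_totals / sum(lobule_totals)
expected   <- outer(rowSums(counts), lobule_frac)     # E_ij = N_i * N_j / sum_j N_j
chisq_stat <- rowSums((counts - expected)^2 / expected)
p_raw <- pchisq(chisq_stat, df = ncol(counts) - 1, lower.tail = FALSE)
p_adj <- p.adjust(p_raw, method = "BH")               # Benjamini-Hochberg FDR
le_scores <- sweep(counts / rowSums(counts), 2, lobule_frac, "/")  # LE_ij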
We fit curves and computed m values for the most significantly differentially expressed genes across five cell type pairs (Fig. ). Differentially expressed genes were determined using Seurat’s FindMarkers function. We then plotted the cumulative distribution of m values for the top 200 genes for each cell type pair; genes were selected based on ordering by absolute Spearman correlation between scaled gene expression and pseudotime rank. Trajectory analysis of peri- and postnatal mouse cerebellum data After generation of digital gene expression matrices for the peri- and postnatal mouse profiles, we filtered out nuclei with fewer than 500 UMIs. We applied the LIGER workflow (similarly to the adult mouse data analysis), to identify clusters corresponding to major developmental pathways. We then isolated the cluster corresponding to GABAergic progenitors (marked by expression of Tfap2b and other canonical markers). We performed a second iteration of LIGER iNMF and Louvain clustering on this population and generated a UMAP representation. Using this UMAP representation, we calculated pseudotime ordering and a corresponding trajectory graph with Monocle3 . To identify modules of genes which varied along the computed trajectory, we used the graph_test and find_gene_modules functions from Monocle3. Reporting summary Further information on research design is available in the linked to this paper. Nuclei suspensions for mouse (C57BL/6J, Jackson Labs) cerebellum profiles were generated from 2 female and 4 male adult mice (60 days old), 1 male E18 mouse, 1 male P0 (newborn) mouse, 1 female P4 (4 days old) mouse, 1 female P8, 2 male P12 and 2 female P16 mice. Adult mice were group-housed with a 12-h light-dark schedule and allowed to acclimate to their housing environment for two weeks after arrival. Timed pregnant mice were received and euthanized to yield E18 mice 6 days after arrival. Newborn mice were housed as individual litters for up to 16 days. All experiments were approved by and in accordance with Broad IACUC protocol number 012-09-16. At E18, P0, P4, P8, P12, P16 and P60, C57BL/6J mice were anaesthetized by administration of isoflurane in a gas chamber flowing 3% isoflurane for 1 min. Anaesthesia was confirmed by checking for a negative tail and paw pinch response. Mice were moved to a dissection tray and anaesthesia was prolonged via a nose cone flowing 3% isoflurane for the duration of the procedure. Transcardial perfusions were performed on adult, pregnant (E18), P8, P12 and P16 mice with ice-cold pH 7.4 HEPES buffer containing 110 mM NaCl, 10 mM HEPES, 25 mM glucose, 75 mM sucrose, 7.5 mM MgCl 2 , and 2.5 mM KCl to remove blood from the brain. P0 and P4 mice were unperfused. The brain was removed from P60, P8, P12 and P16 mice and frozen for 3 min in liquid nitrogen vapour. E18, P0 and P4 mice were sagittally bisected after similarly freezing their brains in situ. All tissue was moved to −80 °C for long-term storage. A detailed protocol is available at protocols.io (10.17504/protocols.io.bcbrism6). Frozen adult mouse brains were securely mounted by the frontal cortex onto cryostat chucks with OCT embedding compound such that the entire posterior half including the cerebellum and brainstem were left exposed and thermally unperturbed. Dissection of each of 16 cerebellar vermal and cortical lobules was performed by hand in the cryostat using an ophthalmic microscalpel (Feather safety Razor P-715) pre-cooled to −20 °C and donning four surgical loupes. 
Whole E18, P0, P4, P8, P12 and P16 mouse cerebella were similarly curated by dissecting rhombomeric cerebellar rudiments from sagittal frozen brain hemispheres using a pre-cooled 1-mm disposable biopsy punch (Integra Miltex). Each excised tissue dissectate was placed into a pre-cooled 0.25 ml PCR tube using pre-cooled forceps and stored at −80 °C. Nuclei were extracted from this frozen tissue using gentle, detergent-based dissociation, according to a protocol available at protocols.io (10.17504/protocols.io.bck6iuze) adapted from one provided by the McCarroll laboratory (Harvard Medical School), and loaded into the 10x Chromium V3 system. Reverse transcription and library generation were performed according to the manufacturer's protocol. Acute cerebellar slices containing Alexa 594-filled patched cells were fixed as described and stored in 70% ethanol at 4 °C until hybridization chain reaction (HCR). They were then subjected to a 'floating slice HCR' protocol in which the recorded cells could be simultaneously re-imaged in conjunction with HCR expression analysis in situ and catalogued as to their positions in the cerebellum.
Any methods, additional references, Nature Research reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at 10.1038/s41586-021-03220-z. Supplementary Information This file contains Supplementary Note for Extended Data Fig. 8, legends for Supplementary Tables 1-4, and Supplementary References. Reporting Summary Peer Review File Supplementary Tables This file contains Supplementary Tables 1-4 (see Supplementary Information for full legends). Supplementary Table 1: Established cell type markers. Supplementary Table 2: Highly differentially expressed genes across all clusters. Supplementary Table 3: Summary of electrophysiological data. Supplementary Table 4: Function parameters used during iterative cluster annotation.
Metabolomic and proteomic profiling of a burn-hemorrhagic shock swine model reveals a metabolomic signature associated with fatal outcomes
b28deaa1-c8c9-47b2-9247-d7bfa120dce5
11706163
Biochemistry[mh]
Trauma has become a leading cause of death for individuals under the age of 45, with a significant portion of these deaths in China attributed to traumatic hemorrhagic shock. In our first-aid experience, trauma is often accompanied by burns, especially in severe deflagration incidents such as the Kunshan and Tianjin Port explosions of 2014–2015 in China, as well as explosions on the modern battlefield. Traumatic hemorrhagic shock combined with severe burns, characterized by the concurrence of post-burn hypovolemia and post-traumatic blood loss, is a far more critical and complicated condition than burns or traumatic shock alone. These severe combined conditions progress rapidly and present a particular challenge for treatment in both military and civilian populations. However, the diagnosis of injury severity currently remains largely subjective, lacking a clinically relevant quantitative measure, and no efficient, targeted clinical preventions or treatments are available. During severe injuries, of which the combined injury of burn-hemorrhagic shock is a typical example, the body undergoes dramatic changes in material and energy metabolism as well as in proteostasis. A more comprehensive understanding of the metabolomic and proteomic changes may open new avenues for diagnostic and treatment development. Metabolomics globally and quantitatively analyzes the body's metabolites, which are mostly small molecules (< 1500 Da) and can be used as physiological or pathological indicators for developing new diagnostic tests. Several studies have utilized metabolomic profiling to identify novel biomarkers associated with disease progression, mechanisms, and prognosis. Proteomics systematically characterizes protein expression and protein interactions, offering information complementary to metabolomics on the functional changes that occur in response to pathological events. Multi-omics data may provide valuable insights into mechanisms and biological functions that may not be revealed by a single data set. In this study, we constructed a swine model of burn-hemorrhagic shock combined injury, as swine have physiological characteristics and metabolic processes that closely resemble those of humans, making them an ideal model for studying trauma-induced metabolic and proteomic disturbances. Using mass spectrometry-based metabolomics and proteomics techniques, we analyzed the metabolic and protein disturbances in serum after injury. The omics data revealed dramatically disordered metabolism and proteostasis, providing valuable insights into the physiological changes in response to the combined injury. Furthermore, we developed a multi-indicator panel for early diagnosis of injury severity, consisting of three metabolites: succinic acid, glutaric acid, and malic acid. These metabolites were selected based on their strong correlation with burn injury severity, ranking as the top three among all metabolites and proteins analyzed. Establishment of swine model of burn-hemorrhagic shock combined injury The experiments were conducted following the International Guiding Principles for Biomedical Research Involving Animals released by the CIOMS and received approval from the Institutional Animal Care and Use Committee at Chinese PLA General Hospital, with the ethics approval ID 2021KY033-KS001. The animal research adhered to the ARRIVE guidelines ( https://arriveguidelines.org ). Eight clean-grade adult male Landrace pigs bred by Beijing Vital Steps Biotechnology Co.
Ltd, with production license numbers SCXK (Beijing) 2018-0011 and SYXK (Beijing) 2024-0027 were used. The pigs were 4–5 months and weighed between 60 and 70 kg. To ensure that the pigs were healthy and ready for the study, they were adaptively fed for 1 week after they were purchased. The temperature of animal room was maintained between 22 and 25° C and a circadian rhythm was of 12 h of light and 12 h of darkness. The pigs were given abrosia for 12 h and water for 6 h before the experiment, Anesthesia was administered intravenously using Zoletil 50 (0.1 ml/kg) before the operation. A deep vein double-lumen catheter was indwelled in the right internal jugular vein of each pig under the guidance of ultrasound, and a PICCO arterial catheter was indwelled in the left femoral artery under the guidance of ultrasound. After anesthesia, 400 g napalm were smeared on the back of the pigs and then ignited to burn for about 35 s to obtain a III° burn of 30% of the total area of skin in each pig. Then, 20% of the total blood volume was released from the femoral artery catheter at a uniform speed within 30 min after the burn, and 10% of the total blood volume was released at a constant speed in the next 5.5 h. The total blood volume was calculated according to 70 ml/kg. Serum samples were collected before injury and 2 h after injury, respectively, as pre-injury control group (eight cases) and post-injury 2-h group (eight cases). The monitor and intervention procedures for each pig continued for 6 h after injury, and pigs that survived at the end of 6 h were finally euthanized. Collection of serum samples Blood samples were collected at room temperature and then centrifuged to obtain serum. After the animal model was prepared, blood samples were collected 2 h before and after injury individually. Use a medical vacuum coagulation tube to draw blood samples in animal model, keep the sample at 4 °C for 30 min, centrifuge with 1000 g at 4 °C for 10 min, and then collect the serum supernatant. Each sample was collected and immediately stored at − 80 °C. Metabolomics LC–MS/MS analysis Quantitative determination of small molecule functional metabolites was performed using ultra-performance liquid chromatography–tandem mass spectrometry (UPLC–MS/MS) (ACQUITY UPLC-Xevo TQ-S, Waters Corp., Milford, MA, USA). Samples were separated by hydrophilic interaction liquid chromatography (HILIC) and analyzed by ACQUITY UPLC BEH C18 1.7 μM VanGuard pre-column (2.1 × 5 mm) and ACQUITY UPLC BEH C18 1.7 μM analytical column (2.1 × 100 mm) columns. All standards were purchased from Sigma-Aldrich (St. Louis, MO, USA), Steraloids Inc. (Newport, RI, USA) and TRC Chemicals (Toronto, ON, Canada). The standard substances were weighed accurately, dissolved in water, methanol, sodium hydroxide solution (Sigma-Aldrich, 795429) or hydrochloric acid solution (Sigma-Aldrich, 258148), and prepared as a concentration of 5.0 mg/mL stock solutions, respectively. A calibration solution was prepared and mixed with an appropriate amount of each standard sample. Formic acid (Mass Pure Grade, A117-50) was purchased from Sigma-Aldrich (St.Louis, MO, USA), methanol (Mass Pure Grade, A-456-4), acetonitrile (Mass Pure Grade, A955-4) and isopropanol (Mass Pure Grade, A461-4) were purchased from Thermo-Fisher Scientific (FairLawn, NJ, USA). Experimental ultrapure water was prepared for LC/MS from a Mill-Q reference ultrapure water system (Millipore, Billerica, MA, USA) equipped with a 0.22 μm filter. 
To avoid degradation, samples were thawed in an ice bath and 20 μL of each blood sample was added to a 96-well plate, which was then transferred to an Eppendorf epMotion workstation (Eppendorf Inc., Hamburg, Germany). 120 μL of ice-water pre-cooled methanol solution (containing internal standard) was added and vortexed vigorously for 5 min. The plates were centrifuged (4000 g, 30 min) at 4 °C and returned to the workstation. 20 μL of freshly prepared derivatization reagent was added to each well, and the plate was sealed and placed at 30 °C for 60 min of derivatization. Furthermore, 330 μL of ice-bathed 50% methanol solution was added to dilute each sample, the plate was centrifuged at 4 °C (4000 g, 30 min), and 135 μL of supernatant was drawn and transferred to a new 96-well plate, to which 10 μL of internal standard was added per well. The derivatized standard stock solution was added to the leftmost wells for serial dilution, and the plate was finally sealed for LC–MS analysis. Processing of metabolomics data Raw data files were generated by UPLC–MS/MS and processed using MassLynx software (v4.1, Waters, Milford, MA, USA), and peaks were integrated, calibrated and quantified for each metabolite. The iMAP platform (v1.0, Metabo-Profile, Shanghai, China) was used for component analysis plotting and statistical analyses. Partial least squares–discriminant analysis (PLS–DA) and orthogonal partial least squares–discriminant analysis (OPLS–DA) were performed. Variable importance in projection (VIP) scores were obtained from the OPLS–DA model. Metabolites with VIP ≥ 1 and P < 0.05 (univariate analyses were performed based on the normality of the data) were considered statistically significant and identified as potential biomarkers. Processing of serum proteomics data The proteomic analysis method has been described previously, and the same biological samples as for metabolomics were used. All proteomic data analyzed for serum of the burn-hemorrhagic shock combined injury swine model were obtained from the ProteomeXchange consortium ( http://proteomecentral.proteomexchange.org ) through the iProX partner repository, data set identifier IPX0003225000. PLS–DA plots were generated using the R (version 4.4.2) package mixOmics (version 6.30.0). Integration of multi-omics data To integrate the metabolomics and proteomics data, sparse generalized canonical correlation discriminant analysis was performed through Data Integration Analysis for Biomarker discovery using Latent cOmponents (DIABLO) in the R package mixOmics. A generalized, supervised partial least squares approach was applied to integrate multiple data types measured on the same biological samples and jointly identify key omics signatures across data sets. Normalized metabolomics and proteomics data were log-transformed by DIABLO before integration. Specifically, we evaluated the correlation between the 33 differential proteins identified through proteomics and the top 10 metabolites from the metabolomics analysis. Pearson correlation coefficients were used to perform the correlation analysis. Statistical analysis A series of operations for data processing, interpretation and visualization were performed using the iMAP (v1.0; Metabo-Profile, Shanghai, China) platform.
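A minimal sketch of the cross-omics correlation step described above, assuming prot and metab are samples-by-feature matrices holding the 33 differential proteins and the top 10 metabolites for the same animals (the object names are ours, not from the original pipeline):

# prot:  samples x 33 differential proteins; metab: samples x 10 top metabolites
cor_mat <- cor(prot, metab, method = "pearson")            # 33 x 10 Pearson correlation matrix
strong_pairs <- which(abs(cor_mat) > 0.7, arr.ind = TRUE)  # strongly correlated protein-metabolite pairs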
Two statistical analysis methods are widely used in metabolomic research: (1) multivariate statistical analysis, such as partial least squares–discriminant analysis (PLS–DA) and (2) univariate statistical analysis, including t test, Mann–Whitney–Wilcoxon ( U test), variance analysis, correlation analysis, etc. The best choice of statistical method usually depends on the data and project goals.
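To illustrate the two branches mentioned above, a brief sketch using the mixOmics package cited for the PLS-DA plots together with a per-metabolite Wilcoxon (Mann-Whitney U) test; the object names and plotting options are illustrative assumptions rather than the authors' actual script:

library(mixOmics)
# X: samples x metabolites matrix; group: factor with levels "t0" (pre-injury) and "t3" (2 h post-injury)
plsda_fit <- plsda(X, group, ncomp = 2)                # two-component PLS-DA
plotIndiv(plsda_fit, comp = c(1, 2), ellipse = TRUE)   # score plot of components 1 and 2
# univariate branch: Wilcoxon rank-sum test per metabolite
p_vals <- apply(X, 2, function(v) wilcox.test(v[group == "t0"], v[group == "t3"])$p.value)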
Metabolomics and proteomics analyses were performed on serum from pigs after they were subjected to thermal injury for 45 s followed by hemorrhagic shock for 2 h, and results were compared with the 0 h serum control. Raw data are reported in Tables and . Non-targeted metabolomics data indicated organic acids and amino acids are major serum metabolites in the swine model with burn-hemorrhagic shock combined injury In the non-targeted metabolomics experiment, PLS–DA completely segregated the 2 h post-injury group (t3) and the 0 h control group (t0), with component 1 explaining 32.1% of the variance and mostly describing the effects of the combined injury, and component 2 (11.2%) mostly explaining biological variability across samples (Fig. A). A total of 194 metabolites were identified. The composition of average abundance of the metabolites in all samples was 51.88% organic acids, 36.22% amino acids, 6.01% fatty acids, 3.99% indoles and 1.89% others (Fig. B). Compared to the 0 h serum control, the relative abundance of organic acids increased from 36.89 to 58.75%, whereas amino acids decreased from 47.66 to 30.98% in the serum 2 h post-injury. Other components, such as lipids including fatty acids and carnitines, also decreased markedly (Fig. C). These results suggest that organic acids, amino acids and lipids undergo remarkable alterations in the burn-hemorrhagic shock swine model. Organic acids and amino acids are obviously upregulated post injury in swine model Univariate analysis showed that 118 metabolites were significantly altered after injury, including 116 up-regulated and 2 down-regulated ( P < 0.05, log2 fold change ≥ 0) (Fig. A). Among them, the top nine most differential metabolites with the lowest P values included hydroxypropionic acid, lysine, tyrosine, 2-methylbutyroylcarnitine, leucine, proline, 2-hydroxybutyric acid, alanine and phenylalanine (Fig. B). Eighty-seven potential biomarkers were selected through the intersection of univariate and OPLS–DA analyses (VIP ≥ 1 and P < 0.05) (Fig. C). These potential biomarkers are illustrated in a heatmap (Fig. D) and a histogram (Fig. E). KEGG pathway analysis based on the 87 metabolites revealed that the pathways most affected by the injury were alanine, aspartate and glutamate metabolism; phenylalanine metabolism; aminoacyl-tRNA biosynthesis; valine, leucine and isoleucine biosynthesis; and the tricarboxylic acid (TCA) cycle ( P < 0.01) (Fig. F). Altered amino acid, glucose and fatty acid metabolism levels Early in the 1980s, several groups found that amino acid metabolism is profoundly altered during burn injury, with hyperaminoacidemia, particularly for glycine, hydroxyproline, alanine, lysine, phenylalanine, and glutamine, in the acute phase.
The serum samples used in our metabolomic analysis were collected 2 h after the burn-hemorrhagic shock combined injury, corresponding to this acute phase. The results showed that alanine, lysine and phenylalanine increased significantly to 2.4-, 1.7- and 1.5-fold, respectively, in the serum 2 h post-injury compared with control ( P < 0.001). Glutamine increased to 1.2-fold that of the control group ( P = 0.09). However, glycine and hydroxyproline were not found to be significantly increased. A set of histidine-related amino acids and dipeptides increased: 1-methylhistidine (1MH) increased 1.12-fold ( P < 0.001), histidine increased 1.3-fold ( P < 0.01), anserine (β-alanyl-3-methylhistidine) increased 1.46-fold ( P < 0.05) and carnosine (beta-alanyl-L-histidine) increased 1.45-fold ( P < 0.01). The metabolomic data also showed that other amino acids increased significantly, as listed in Table . Furthermore, the injury group exhibited an increase in the metabolites of the glycolysis, gluconeogenesis and TCA pathways in the serum. Lactic acid and pyruvic acid were significantly increased in the serum, to 3.44-fold ( P < 0.01) and 3.25-fold ( P < 0.01), respectively. Five out of eight TCA cycle intermediates were significantly higher in the injury serum, with 1.5- to 4.7-fold increases. Glucogenic amino acids such as alanine, threonine, asparagine, tyrosine, histidine, serine, proline and valine also increased significantly, furnishing the liver with more raw material for gluconeogenesis (Table ). Levels of the vast majority of the detected fatty acid and acylcarnitine (AC) species, ranging from short-chain to medium-chain and long-chain, increased significantly, while free L-carnitine concomitantly decreased upon burn-shock injury. Apart from the above-mentioned amino acid and fat alterations, organic acids related to amino acid metabolism, such as oxoadipic acid and methylmalonic acid, were also significantly higher. All detected bile acids increased significantly, whether primary or secondary, conjugated or unconjugated (Table ). Proteomic analysis of serum from swine model of burn-hemorrhagic shock combined injury To establish a comprehensive view of burn-hemorrhagic shock combined injury, proteomics was performed on the same serum samples. We identified a total of 594 proteins after removing albumin, IgG, and other high-abundance proteins from the samples. PLS–DA revealed two distinct clusters, effectively discriminating between the serum proteomes of the 0 h control group and the 2 h post-injury group (Fig. A). Compared to the control, there were 33 differentially expressed proteins in the serum 2 h post-injury. Among them, 23 proteins were upregulated and 10 proteins were downregulated ( P < 0.05, |log2 fold change| ≥ 0.5) (Fig. B). Biological Process (BP) analysis emphasized the importance of response to wounding, coagulation, homeostasis, regulation of body fluid levels, wound healing, cell-substrate adhesion, and platelet activation. Cellular Component (CC) analysis showed enrichment of the extracellular region, extracellular space, extracellular exosome, extracellular organelle, extracellular vesicle, and collagen-containing extracellular matrix. Molecular Function (MF) analysis showed enrichment of structural molecule activity, extracellular matrix structural constituent and cell adhesion molecule binding (Fig. C). KEGG analysis highlighted platelet activation, focal adhesion, complement, ECM–receptor interaction and the PI3K–Akt signaling pathway (Fig. D).
Matrix and RBC-related proteins are increased in the serum of the burn-shock group

Two isoforms of type I collagen (COL1A1, COL1A2) were identified, and both showed remarkable elevations in the serum 2 h post-injury. COL1A2 increased the most dramatically of all the differential proteins, reaching 829.49-fold ( P = 0.02), and COL1A1 ranked third (19.3-fold, P = 0.02). Collagen type III alpha-1 chain preproprotein (COL3A1) was identified and showed a 4.2-fold increase ( P = 0.02). Hemoglobin alpha and beta chains (HBA and HBB), the most abundant proteins in RBCs, displayed 10-fold and 11-fold increments, ranking sixth and fifth, respectively. Biliverdin reductase B (BLVRB) was upregulated as much as 12.5-fold ( P < 0.01), the fourth strongest increment. Glutathione S-transferase P (GSTP1) was also identified and showed a 3.48-fold increase ( P < 0.01). Peroxiredoxin-2 (fragment, PRDX2) showed a 1.9-fold increment ( P = 0.01). Two isoforms of carbonic anhydrase (CA1, CA2) increased 2.92-fold ( P = 0.03) and 7.13-fold ( P < 0.001), respectively (Table ).

Wounding, acute stress and coagulation proteins are identified and altered

Five proteins were identified in this category. Three fibrinogen isoforms were detected: the fibrinogen alpha, beta and gamma chains (FGA, FGB, FGG). Their increases ranged between 2- and 3-fold ( P < 0.05) in the serum 2 h post-injury. A stress-associated protein, creatine kinase M-type (CKM), increased 2.05-fold ( P < 0.01). Serum coagulation factor XIII A chain (F13A1) was also identified in our samples and showed a 2-fold decrease in the burn-shock group ( P < 0.01) (Table ).

Integrated correlation analysis of multi-omics data

Correlation analysis between injury-related metabolites and differential proteins identified six metabolites that were closely related (correlation > 0.7) to 13 differential proteins observed in the proteome (Fig. A). Type I collagen was strongly associated with malic acid, glutaric acid, methylmalonic acid and succinic acid; hemoglobin was strongly associated with threonine and succinate. CA was strongly associated with pyridineacetic acid, aminoadipic acid, glucaric acid, phenylacetic acid, citrulline, ALA and 2-hydroxybutyric acid (Fig. B). Therefore, in the burn-hemorrhagic shock combined injury, the potential components of the two omics data sets are highly correlated.

Succinic acid, glutaric acid and malic acid levels are associated with injury severity and can predict fatal outcomes

Injury scores were assigned according to the survival time of the pigs after injury to reflect injury severity. Three of the eight pigs were still alive 6 h post-injury and were assigned injury degree 1 (mild disease); two pigs died 4 h post-injury and were assigned degree 2 (moderate disease); and three pigs died 2 h post-injury and were assigned degree 3 (severe disease). Pearson correlation analysis was performed between the injury score and the increment of each differential metabolite and protein (serum value of the differential metabolite or protein at 2 h minus that at 0 h). Ten metabolites and three proteins showed a significant positive linear correlation with injury scores ( P < 0.05) (Table ). Succinic acid, glutaric acid and malic acid showed the highest Pearson correlation coefficients. We then applied biostatistical methods to analyze whether the elevated serum levels of succinic acid, glutaric acid and malic acid together were linked to fatal outcomes.
Using multiple linear regression analysis, we obtained an equation combining the three variables to predict the degree of injury severity. The model generated coefficients for identifying the injury severity degree (Table ). The fitted regression line determines the intercept and slopes that best satisfy the given criteria. The prediction formula is “Injury Severity = 0.998735 + 0.007335 × Δ(succinic acid) + 1.355171 × Δ(glutaric acid) + 0.01229 × Δ(malic acid)”, where Δ(succinic acid), Δ(glutaric acid) and Δ(malic acid) represent the net differences in their concentrations, calculated as the post-injury level minus the pre-injury level. The intercept (0.998735) reflects the baseline injury severity when there is no measurable net difference in the metabolite levels. The coefficients indicate the magnitude of each metabolite's contribution to the predicted injury severity. The model, with a coefficient of determination (R²) of 0.8025, explains approximately 80.25% of the variability in injury severity, indicating that it is a robust predictor. Additional metrics, such as the mean absolute error (MAE), mean squared error (MSE) and root mean squared error (RMSE), further confirm the model's predictive accuracy and robustness (Table ). Comparisons between actual and predicted values (Fig. A, B) demonstrate the model's performance in capturing the relationship between metabolite alterations and injury severity. In addition, we used a Q–Q plot of the residuals to assess the reliability of the linear regression model (Fig. C); the distribution of the residuals supported the model's good fit. Our results suggest a potential therapeutic strategy of decreasing the serum levels of the above metabolites and proteins, especially succinic acid, glutaric acid and malic acid, during severe burn-hemorrhagic shock injuries. We also suggest that serum levels of succinic acid, glutaric acid and malic acid constitute an indicator panel of disease severity that can predict mortality.
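To illustrate how the fitted equation would be applied in practice, the short sketch below hard-codes the coefficients reported above; the Δ values used in the example call are hypothetical and serve only to show the calculation, not to reproduce the study data.

def predict_injury_severity(d_succinic, d_glutaric, d_malic):
    # Predicted injury severity from the net (2 h minus 0 h) serum changes,
    # using the coefficients of the multiple linear regression reported above.
    return 0.998735 + 0.007335 * d_succinic + 1.355171 * d_glutaric + 0.01229 * d_malic

# Hypothetical example call; units must match those used when the model was fitted
severity = predict_injury_severity(d_succinic=50.0, d_glutaric=0.5, d_malic=40.0)
print(round(severity, 2))  # interpreted on the 1 (mild) to 3 (severe) scale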
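The integrated correlation analysis described above pairs each injury-related metabolite with each differential protein and retains pairs whose Pearson correlation exceeds 0.7. A minimal sketch of that pairing is given below; the two data frames are hypothetical placeholders (rows = serum samples, columns = features) rather than the actual study data.

import pandas as pd

def cross_omics_correlation(metabolites, proteins, r_cut=0.7):
    # Pearson correlation of every metabolite column against every protein column;
    # both DataFrames must share the same row index (one row per serum sample).
    pairs = []
    for met in metabolites.columns:
        for prot in proteins.columns:
            r = metabolites[met].corr(proteins[prot])  # Pearson by default
            if abs(r) > r_cut:
                pairs.append({"metabolite": met, "protein": prot, "r": r})
    return pd.DataFrame(pairs)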
Burn and hemorrhagic shock patients often exhibit severe hemodynamic disturbance, hypermetabolism, oxidative stress and stimulation of catabolic hormones. To globally examine the molecular and biochemical changes in the body, we constructed a swine model of burn-hemorrhagic shock combined injury and examined the changes in global metabolic and proteomic profiles in serum following injury. Despite preliminary studies of burn or shock alone, to the best of our knowledge, no metabolomic or proteomic study of the burn-hemorrhagic shock combined injury has been published to date. Swine models were selected because they are better suited than rat models to burn and shock studies, showing metabolomic changes during injury that are much closer to those in humans. A series of metabolic and proteomic biomarkers was discovered for burn-hemorrhagic shock combined injury; furthermore, a total of 13 important differential metabolites and proteins were found to correlate with injury degree and can be used as biomarkers to evaluate injury severity, providing a strategy for early diagnosis and intervention in patients who may deteriorate.

Altered amino acid, glucose and fatty acid metabolism and related potential treatments

Severe burn and shock trauma cause a profound stress response early after the insult. During stress, glucocorticoids and catecholamines increase rapidly and profoundly in the circulating blood and exert significant effects on whole-body protein, glucose and lipid metabolism, characterized by hyperglycemia (increased glycogenolysis and gluconeogenesis, decreased glycogenesis) and a hypercatabolic state with loss of collagen and muscle mass leading to the profound skeletal muscle wasting shown in previous studies . We found similar but more comprehensive and extensive changes in the metabolomic data for the combined injury.

Decreased mitochondrial oxidation and CoQ10

Long before mass spectrometry was used for metabolite analysis, deuterium- and carbon-labeled stable isotopes of glucose were used to quantify glucose metabolism in burned adults. These studies showed that the percentage of glucose cleared into tissue that was fully oxidized to CO2 was lower, suggesting a deficit in tissue glucose oxidation in burned patients . Galster et al. also reported that muscle lipid oxidation following burns is depressed . Many studies have tried to determine why severely burned patients cannot oxidize glucose and lipids efficiently as energy sources. In 2007, Cree et al. found that mitochondrial oxidation of both glucose and palmitate in tissue of burn patients was reduced to only about half of that in controls . Providing additional evidence, our metabolome data showed significant accumulation of pyruvic acid and of various FFAs and acyl-carnitines in the serum 2 h post-injury. Glucose and fat oxidation pathways converge at acetyl CoA, which finally enters the common TCA cycle, suggesting a dysfunction in this shared process in burns .
Furthermore, five of the eight TCA intermediates, namely citric acid, oxoglutaric acid, succinic acid, fumaric acid and malic acid, were significantly accumulated in the serum 2 h post-injury, suggesting that the TCA cycle is severely blocked. The TCA cycle is coupled to the electron transport chain (ETC), which transfers electrons from NADH to O2 at the mitochondrial inner membrane. The three heavily regulated key enzymes of the TCA cycle, citrate synthase, isocitrate dehydrogenase and α-ketoglutarate dehydrogenase, can be inhibited by a high NADH/NAD+ ratio. Therefore, the electron transfer rate is critical for TCA cycle turnover. In cases of ETC complex damage or severe hypoxia, electrons from NADH cannot be transferred to O2 to produce H2O, which blocks the ETC and subsequently increases the NADH/NAD+ ratio; the TCA cycle is then blocked. Integrating all of this information, we propose that the increases in glucose, fat and TCA intermediates are due to a block in the ETC. In accordance with our hypothesis, succinate accumulation has been associated with increased mortality following hemorrhagic shock (HS) in military and civilian populations. One explanation is that HS causes oxygen scarcity in tissue, which subsequently exacerbates mitochondrial uncoupling . Another explanation relies on excess glutaminolysis providing a rate-limiting substrate for the synthesis of hemorrhagic succinate in RBCs . However, suppression of glutaminolysis with a glutaminase inhibitor unexpectedly worsened outcomes in HS rats, suggesting that the increased succinate does not derive from glutamine or glutaminolysis . We therefore propose that reversing the ETC–TCA blockade could reduce the increased glucose, fat and TCA intermediates and improve the prognosis of severe burn-hemorrhagic shock injury. Coenzyme Q10 (CoQ10) can enhance ETC function and promote oxidative phosphorylation , protecting the integrity of cell membranes and mitochondrial membranes. Previous studies reported that CoQ10 supplementation could effectively prevent the mitochondrial dysfunction and insulin resistance of skeletal muscle caused by burns . PPAR-α agonists are another candidate, since they can increase mitochondrial function and fatty acid oxidation by increasing citrate synthase activity and upregulating many genes encoding proteins involved in substrate oxidation, including enzymes of the TCA cycle and respiratory chain .

Decreased glycolysis and increased gluconeogenesis

In our study, we detected a 3.25-fold increase in pyruvic acid, a core metabolite of glucose metabolism. Pyruvic acid has four fates: conversion to acetyl CoA, lactate, alanine or oxaloacetate. Glucocorticoids induced by traumatic stress can inhibit the oxidation of pyruvic acid to acetyl CoA. Together with the blocked TCA cycle, this explains why the levels of pyruvic acid and its other products, such as lactate and alanine, increased in our study. However, oxaloacetate was not detected in either the 0 h or the 2 h post-injury serum. One possible reason is that the increased NADH/NAD+ ratio promotes malic acid formation from oxaloacetate. Another is that oxaloacetate can be converted to phosphoenolpyruvate to form glucose. During burn-hemorrhagic shock stress, gluconeogenesis increases under the regulation of highly elevated glucocorticoids; therefore, the oxaloacetate level decreases.
Because increased alanine inhibits pyruvate kinase and increased citric acid inhibits phosphofructokinase-1, glycolysis is expected to be greatly inhibited in the burn-hemorrhagic shock combined injury.

Increased lipolysis, decreased carnitine transport and L-carnitine

Stress hormones such as epinephrine are potent activators of adipose tissue lipolysis and mobilize lipid stores to release more free fatty acids into the blood after severe trauma. FFAs taken up into cells are first converted to acyl CoA in the cytosol, then transported into mitochondria via the carnitine/acylcarnitine shuttle system and β-oxidized to acetyl CoA. We detected significantly increased medium-chain fatty acids and short-, medium- and long-chain acyl-carnitines, together with an exhausted free carnitine pool. It has been proposed that acute elevations in lipolysis in response to adrenergic stress may increase energy availability as an adaptive response . However, our data suggest that acetyl CoA cannot enter the TCA cycle efficiently because of the TCA–ETC blockade; acyl-carnitines then accumulate because they cannot cycle and release free carnitine. Taken together, these data offer a convincing explanation as to why glucose and lipid taken up by tissue are not fully oxidized by mitochondria. In the injured group, the significantly increased levels of succinic acid, oxoglutaric acid and lysine, all involved in the carnitine synthesis pathway, are consistent with an increased demand for L-carnitine synthesis, suggesting that L-carnitine supplementation could potentially restore normal metabolic levels. Downregulation of serum L-carnitine has been reported to inhibit the activity of pyruvate dehydrogenase, causing accumulation of pyruvic acid ; it also indirectly affects amino acid metabolism . L-carnitine is now used clinically to alleviate hemorrhagic shock and to reduce ischemia and hypoxia. It has also been reported that L-carnitine supplementation can reduce cellular and mitochondrial damage in the liver by maintaining CPT1 enzyme activity . Yet it is important to note that accelerating the TCA–ETC is indispensable to prevent supplemented L-carnitine from being trapped at the acyl-carnitine step.

Insulin resistance

Previous studies showed that elevated concentrations of short- and long-chain acylcarnitines, which were also detected in our study (Table ), are linked to insulin resistance. Together with the above evidence of decreased glycolysis and increased gluconeogenesis, our results indicate the development of insulin resistance and provide a detailed, comprehensive glucose and fat metabolite profile.

Protein alteration and related potential treatments

Increased matrix protein

Collagen is the most abundant protein in mammals, accounting for approximately one-third of whole-body protein. As early as the 1960s it was reported that administration of cortisol to rats induced collagenolytic and proteolytic activities in the extracellular compartment of the skin, with marked and abrupt loss of cutaneous collagen . Thermal injury also degrades collagen into amino acids . Correspondingly, our proteomic examination found that type I collagen increased significantly in the serum 2 h post-injury compared with control and correlated with the degree of injury severity.
Notably, the “collagen” detected in the proteome may not be the full-length protein but rather peptides produced from collagen breakdown during connective tissue destruction, which were mapped to collagen during LC–MS analysis. Glycine and hydroxyproline are present in large amounts in collagen; however, they were not significantly increased in our metabolome data. These results suggest that collagen may be degraded not only into amino acids, as reported previously , but also into peptide segments, especially during the acute phase within 2 h post-injury.

Altered RBC-related proteins

Recent studies have estimated that the adult human body contains 25 trillion circulating RBCs, accounting for ~83% of all host cells. This makes RBCs a kind of circulating organ critical for human health, and their damage and hemolysis are thought to play important roles in both physiological and pathological situations . In our injury model, we found important signs of hemolysis, which may influence prognosis. The redox response is crucial for maintaining the normal function and integrity of RBCs . Several diseases, such as favism with deficient G6PD activity, have underlined its importance. Reactive oxygen species (ROS) can damage RBCs, but a diversity of antioxidant systems is known to protect and repair them. The high-capacity redox systems in RBCs also scavenge extracellular radicals and thus protect against radicals formed in the body as a whole . Under normal steady-state conditions, these robust protection systems can cope with the ROS threat. However, during the extensive oxidative stress induced by severe burn and shock trauma, abundant ROS are released, significantly reducing the antioxidative capacity of RBCs and damaging them. Massive hemolysis is subsequently induced, releasing appreciable amounts of cytosolic contents into the circulation. Each mature RBC contains ~250–270 million copies of hemoglobin, accounting for ~98% of the cytosolic proteome . This is in accordance with our proteomic data showing large amounts of free hemoglobin in the blood in the combined injury. Other proteins abundant in RBCs, such as BLVRB, glutathione transferase, peroxiredoxin and carbonic anhydrase (CA), were also significantly increased 2 h post-injury. Antioxidant proteins are expressed in high abundance in mature RBCs to help regulate cellular redox. BLVRB is a general NADPH-dependent flavin reductase (FR) that reduces numerous substrates and plays a critical role in regulating cellular redox. Wu et al. found that BLVRB mRNA levels increased during erythropoiesis, and Paukovich et al. detected high levels of BLVRB in mature RBCs by mass spectrometry, confirming the importance of BLVRB in redox regulation . Glutathione transferase P1-1 (GSTP1-1) is abundant in mammalian RBCs and is exclusively expressed during erythropoiesis. GSTP1-1 detoxifies a large variety of toxic compounds using glutathione or by acting as a ligandin, and is also involved in the oxidative stress response. GSTP1-1 has been found to be overexpressed in human erythrocytes under conditions of increased blood toxicity, such as in healthy subjects living in polluted areas, likely as a defense response . Peroxiredoxin 2 (Prdx2) is the third most abundant protein in RBCs and acts as a rapid sink for peroxides, reacting with them to generate a disulfide-linked dimer .
In the correlation analysis, the CA1 increment predicted the degree of injury severity, suggesting a relationship between hemolysis and injury severity. Similarly, previous research has shown that free hemoglobin in burn blister fluid reflects burn severity . Taken together, the increases in RBC-related proteins in the serum 2 h post-injury support an important role of hemolysis in the progression of the combined injury.

Multivariate linear regression model predicting fatal outcomes

For historical reasons, such as the lack of sufficiently sized cohorts for standard statistical analysis, there is currently no clinical biomarker to predict fatal outcome in lethal burn-hemorrhagic shock combined injury, in either military or civilian populations. We constructed a swine model of burn-hemorrhagic shock combined injury and found that the markedly elevated succinic acid, glutaric acid and malic acid are closely associated with injury severity and can efficiently discriminate between mild, moderate and severe disease. Succinic acid, glutaric acid and malic acid exist in all nucleated cells and together constitute a plausible biomarker panel for predicting fatal outcome in lethal injury.
In summary, this study performed an integrated analysis of metabolomic and proteomic data from the same biological samples from pigs with burn-hemorrhagic shock combined injury. By analyzing the metabolite and protein changes in the acute phase of the burn-shock swine model, we can now provide a signature panel of diagnostic proteins and metabolites. This panel can predict injury severity, provide a theoretical basis for early detection of patients who may deteriorate, and reduce the risk of mortality and serious complications in patients with burn/shock combined injury. Despite the difficulties of establishing the burn-hemorrhagic shock model and collecting blood samples in the immediate early period post-injury, this study advances our understanding of how extreme trauma affects metabolism and multiple pathological processes.
Anti-inflammatory and antioxidant effects of nanoformulations composed of metal-organic frameworks delivering rutin and/or piperine natural agents
Drug delivery systems (DDSs) are the main nanomedicine platforms in modern pharmaceutical research, application and development, used to control the release of therapeutic agents, reduce side effects, increase bioavailability and solubility, enhance targeting, and improve therapeutic activity. Building an effective DDS requires a suitable drug vehicle/carrier, which generally falls into one of two types: organic materials (e.g. chitosan nanoparticles, liposomes) and inorganic materials (e.g. mesoporous silica nanoparticles) (Horcajada et al., ). A third, hybrid type, the inorganic–organic route, has emerged in recent decades and relies on metal–organic frameworks (MOFs) for drug delivery (Horcajada et al., ). MOF solids are relatively new materials with highly specific porosity and surface areas, offering the potential for many different uses, including medical applications (Chedid & Yassin, ). The first synthesis report for MOFs was published in 1989 (Hoskins & Robson, ; Horcajada et al., ). They consist of a crystalline network of metal clusters or single metal ions associated with strongly covalently bonded organic linkers (Domingos et al., ; Pettinari et al., ) and have also been characterized as coordination polymers with central metal ions and organic linkers, or as porous coordination networks (Janiak & Vieth, ; Gangu et al., ; Santos et al., ). These frameworks have several notable features, including a highly specific surface area, adjustable pore size and tunable porosity. In addition, they are flexible, of low density and thermally stable, with a distinctly ordered structure, versatile functionality, good biocompatibility and low toxicity. MOFs can also be developed using various metals and linkers (Shearer et al., ; Zhao ; Pettinari et al., ; Han et al., ; Rojas et al., ; Su et al., ; Liu et al., ). Drug-loading capacity is an important prerequisite for the applicability of a DDS drug carrier. MOFs show efficient drug-loading capacity for various candidates, such as the anticancer drugs 5-fluorouracil (∼28 wt.%; Hu et al., ) and doxorubicin (DOX; ∼10 wt.%; Liang et al., ), caffeine (∼15–50 wt.%; Horcajada et al., ; Cunha et al., , ; Chevreau et al., ), magnolol (∼72%; Santos et al., ), ibuprofen (>49%; Lu et al., ), the antibiotic gentamicin (19 wt.%; Soltani et al., ), and the essential oil component thymol (∼4%; Wu et al., ). Several studies suggest that MOFs of various types show great promise for the delivery of, e.g., anticancer, anti-inflammatory and antibacterial agents (Dong et al., ; Li et al., ; Nasrabadi et al., ; Abánades Lázaro et al., ; Lu et al., ). An important and distinctive feature of MOFs as carriers is their low toxicity and their enhancement of drug bioavailability and solubility. For example, Santos et al. reported that a Zr-based MOF enhances the bioavailability of magnolol (which is poorly soluble) and that the magnolol-loaded MOF exerted no toxicity at 2000 mg/kg in female Sprague–Dawley rats. Natural agents remain a valuable source of medicines, and hundreds of promising agents likely remain to be discovered and evaluated for human diseases. The advantages of natural agents are their potentially greater safety, cost-effectiveness and pharmacological versatility (Harvey, ; Atanasov et al., ). Common disadvantages to their use are poor bioavailability and water solubility, lack of targeting specificity, and difficulty in achieving controlled release.
Nanomedicine technology offers a potential solution to these problems. Here, two plant-derived agents, piperine (an amide alkaloid also known as Pip) and rutin (a flavonoid known as Ru), are of special interest. Black pepper, a common spice, can yield about 6–9% pure Pip from its fruits (Damanhouri & Ahmad, ; Gorgani et al., ). Pip shows potential in exerting anti-inflammatory (Bang et al., ), neuroprotective (Yang et al., ), antioxidant (Selvendiran et al., ), and anti-tumor (Selvendiran et al., ; Do et al., ; Samykutty et al., ; Yaffe et al., ; Gunasekaran et al., ; Si et al., ; Yoo et al., ) effects and may enhance drug bioavailability (Shoba et al., ; Kasibhatta & Naidu, ). Ru, known chemically as 3,3′,4′,5,7-pentahydroxyflavone-3-rhamnoglucoside, occurs abundantly in many plants, e.g. buckwheat, tea, citrus, and apple (Harborne, ). Its potential activities include anti-inflammatory, antioxidant, anti-bacterial, anti-cancer, neuroprotective, and cardioprotective effects (Guardia et al., ; Annapurna et al., ; Khan et al., ; Alonso-Castro et al., ; Al-Rejaie et al., ; Kamel et al., ; Ganeshpurkar & Saluja, ). Despite these fascinating pharmacological effects in in vitro and pre-clinical studies, Ru and Pip have yet to be examined in clinical investigations, primarily because of the inherent limitations of such agents (e.g. poor solubility and bioavailability and a lack of site-specific targeting). No delivery systems for Ru and/or Pip using MOFs have, to our knowledge, been published. Here, we describe a novel delivery system we designed that contains two types of MOFs, one zirconium-based (Zr-MOFs) and the other titanium-based (Ti-MOFs), loaded with Ru and/or Pip to yield various nanoformulations . As the loading capacity of any drug or therapeutic agent is important for determining its activity and release, we also evaluated these constructs for high loading capacity and the ability to co-deliver two drugs. In a final step, we evaluated whether the nanoformulations enhanced therapeutic efficiency compared with the effects of the free natural agents. In the in vivo studies, we found enhanced anti-inflammatory and antioxidant activities of the nanoformulation with each natural agent compared with either free agent alone. Synthesis of ZrMOF (UiO-66-COOH) A 100 mL reaction kettle was charged with 1,2,4-benzenetricarboxylic acid (0.424 g), ZrCl4 (0.463 g), DMF (10 mL), demineralized water (8.8 mL), and acetic acid (12.5 mL). The reaction was carried out at 100 °C for 24 h. MOFs were separated from the reaction solution by centrifugation (10,000 rpm, 5 min × 3) and washing with methanol. Finally, drying was carried out under vacuum at 55 °C, and UiO-66 was obtained (Abdelhameed et al., ; Li et al., ). Synthesis of TiMOF (MIL-125-NH2) The TiMOF was prepared according to Abdelhameed et al. as follows: 2-aminoterephthalic acid (1 g, 5.5 mmol) was dissolved in a mixture of DMF/methanol (2:1, v/v). To the mixture, titanium isopropoxide (1 mL, 3.38 mmol) was added at room temperature (RT) under continuous stirring. The mixture was then kept for 24 h at 150 °C. After the solvothermal process, the slurry was converted to a yellowish precipitate, which was isolated by filtration and then washed with DMF followed by methanol. The isolated precipitate was dried under vacuum to obtain TiMOF powder. Surface modification of MOFs ZrMOF and TiMOF materials were functionalized with silane TS groups through a post-synthesis route . 
Typically, 0.5 g of MOFs was suspended in 50 mL anhydrous toluene (POCH, Gliwice, Poland) by sonication (water bath sonicator; Elma GmbH, Singen, Germany) for 10 minutes. Afterward, the TS silane (tert-butyl(chloro)diphenylsilane, 98%; Acros Organics, Geel, Belgium) was added to the solution drop by drop under vigorous stirring, followed by adjustment of the stirring speed to 300 rpm and maintenance of the solution at RT for 24 hours. We then washed and filtered the solution three or four times with methanol (Fisher Scientific, Loughborough, UK) and deionized water to remove un-reacted TS silane molecules from the MOF particles. In a final step, the materials were dried for 24 hours at 60 °C. The resulting functionalized MOFs were designated as ZrMOFTS and TiMOFTS. Preparation of nanoformulations Ru was isolated from plant material according to Zhu et al. . For the isolation, 2 kg of dried, powdered Punica granatum peel was extracted in aqueous methanol (80%) three times. After evaporation of the combined extracts at 45 °C in vacuo , about 100 g of a dark brown residue remained. For the initial separation, we used hexane, CH 2 Cl 2 , EtOAc, and BuOH for liquid–liquid extraction of the crude extract. To purify the EtOAc fraction and yield Ru, we used silica gel column chromatography and Sephadex LH-20. The purity of the isolated Ru was confirmed by NMR and HPLC, and the data are provided in the supplementary material . We purchased Pip from Sigma-Aldrich (St. Louis, MO). Ru or Pip or both were loaded onto the functionalized ZrMOFTS and TiMOFTS in single or dual loadings. The drug:MOF ratio was 1:2, and for dual loading, the Ru:Pip ratio was fixed at 1:1. In a typical experiment, 100 mg of Ru or Pip (single loading) or 50 mg of Ru plus 50 mg of Pip (dual loading) was dissolved in ethanol (15 mL), followed by addition to the solution of 200 mg of ZrMOFTS or TiMOFTS. After stirring (200 rpm) at RT for 24 hours, the solution was transferred to a round flask and evaporated at 60 °C in a Rotavap (Büchi, Flawil, Switzerland), and the resulting powder was resuspended in deionized water, followed by another evaporation to remove unloaded Ru or Pip. This step was repeated once more to ensure removal of unloaded agent. To yield the final nanoformulations, the resulting materials were dried for 12 hours at 60 °C in an oven. The drug-loaded MOF nanoformulations were designated as ZrMOFTS-Ru, ZrMOFTS-Pip, ZrMOFTS-Ru-Pip, TiMOFTS-Ru, TiMOFTS-Pip, and TiMOFTS-Ru-Pip. Characterization To observe the morphology of the materials, we used field-emission (FE) scanning electron microscopy (SEM) (Ultra Plus, Zeiss, Jena, Germany) at 3 kV with different magnifications. Before imaging, the materials were sputter coated with gold–palladium (Bal-Tech SCD 005, Balzers, Liechtenstein). To record the crystalline patterns of the materials, we used powder X-ray diffraction (XRD) (X’PertPRO System, PANalytical, Almelo, Netherlands), with CuKα radiation (40 mA and 40 kV; 2θ range of 5–100°). For identifying the functional groups on the material surface, we used Fourier transform infrared (FTIR) spectroscopy (Bruker Optics Tensor 27, Bruker Corporation, Billerica, MA) with attenuated total reflectance (ATR, Platinium ATR-Einheit A 255, Bruker, Karlsruhe, Germany). ATR-FTIR spectra were recorded over the 400–4000 cm −1 range, with a spectral resolution of 1 cm −1 . For the simultaneous thermal analysis (STA), which was coupled with differential scanning calorimetry, we used the STA 449 F1 Jupiter (NETZSCH-Feinmahltechnik GmbH, Selb, Germany). 
Measurements were done over the temperature range from RT to 800 °C in a gas mixture of helium and synthetic air flowing through the furnace chamber. Before starting the experiment, the chamber was purged for 10 minutes with the same gas mixture. A similar amount of sample, approximately 10 mg, was used for all experiments. We determined the zeta potential with a NanoZS Malvern ZetaSizer (Malvern, UK) by creating a suspension of the materials in deionized water (Hydrolab, Straszyn, Poland) at 1 mg/mL, adjusted to ∼pH 7.4 and measured at 23.5 °C. For the particle size distributions, we analyzed the materials using measurements derived from dynamic light scattering (DLS) recorded at RT. Particle size and polydispersity index (PDI) for the materials were determined using the DLS technique, following reconstitution in distilled water. CNH contents were determined with a LECO CHNS-932 element analyzer (Leco Corporation, St. Joseph, MI). The metal concentrations of the prepared materials were analyzed with an atomic absorption spectrophotometer (Perkin-Elmer Analyst 200 AAS, Waltham, MA). Entrapment efficiency (EE) and total drug content in nanoformulations EE : Accurately weighed nanoformulations (5 mg) were suspended in 10 mL of ethanol. Nanoformulations were centrifuged at 25,000 rpm for 30 minutes at 4 °C using a high-speed cooling ultracentrifuge (Sigma 3-30KS, Sigma Laborzentrifugen GmbH, Osterode am Harz, Germany). After centrifugation, the supernatant was drawn off and analyzed in a UV–visible spectrophotometer (Shimadzu 1800, Kyoto, Japan) at the corresponding λ max of the respective active ingredient (357 nm for Ru and 342 nm for Pip). Entrapment efficiency and total drug content were determined using the following formula, according to our previous work (AbouAitah et al., ): EE (%) = [initial amount of Ru or Pip (theoretically calculated) − amount of free Ru or Pip measured in the supernatant] / initial amount of Ru or Pip (theoretically calculated) × 100. (1) For calculating the loading capacity and total content, we dissolved the 5-mg nanoformulation in 5 mL ethanol, followed by stirring for three hours to extract the natural agents from the MOF nanoformulation. The solution was then filtered through an Axiva syringe filter (0.2 µm) to exclude MOF particles. For determining the Ru or Pip concentration in the samples, we used UV–visible spectrophotometry at the corresponding λ max of each natural agent. We calculated the percentage loading content and loading capacity as follows: Total loading content (%) = amount of Ru or Pip entrapped / total weight of MOF carrier × 100 (2) Total loading capacity (%) = experimental Ru or Pip content / theoretical content for each × 100 (3) In vitro release studies The release properties of the nanoformulations were evaluated using a modified dialysis bag diffusion technique according to our previous work on Pip (AbouAitah et al., ). Briefly, a weighed amount of each nanoformulation (5 mg) was transferred to a cellulose dialysis bag (Sigma-Aldrich CHEMIE GmbH, Taufkirchen, Germany) containing 5 mL PBS buffer as the release medium. After the bag was sealed, it was immersed in a glass bottle filled with 50 mL of PBS (pH 7.4), and the bottle was closed. Nanoformulations prepared from non-modified or silane-modified MOFs were placed in a constant-temperature (37 °C) shaking incubator (GFL 3032, Gesellschaft für Labortechnik GmbH, Burgwedel, Germany) at 150 rpm. 
At specified intervals (1, 2, 3, 4, 5, 6, 8, 12, 24, 36, and 48 hours), we collected a 0.5-mL aliquot of release medium and replaced it with an equal volume of fresh buffer. Before measurements were taken, the solutions were passed through a 0.45-µm Millipore filter. The average cumulative percentage of drug released from each nanoformulation was analyzed in triplicate via spectrophotometry. The filtered solutions containing Ru and Pip were measured using a UV–visible spectrophotometer. Co-delivery nanoformulations containing both Ru and Pip were measured once for Ru and once for Pip to characterize the release of each agent from the co-delivery nanoformulations. To analyze the kinetics of the release, we used KineDS3 software (developed at Jagiellonian University, Krakow, Poland), fitting the data to different kinetic models using non-linear and linear regressions. In vivo pharmacodynamic studies Anti-inflammatory experiment For the animal studies, we purchased male albino Wistar rats (∼250 ± 50 g; National Research Centre, Giza, Egypt). Animal studies were conducted in keeping with the ethical standards of the pharmacology unit and received approval from the ethics committee of the National Research Centre; no additional approvals were required for the current work. Animals were allocated into 12 groups of eight rats each. Results were calculated as mean values ± SD. Carrageenan–kaolin-induced paw edema in rats A blend was prepared consisting of 20% (w/v) kaolin suspension and 1% (w/v) carrageenan, both in saline (Sigma Aldrich, St. Louis, MO). We followed Sur et al. , with minor changes. To induce inflammation, rats were subcutaneously injected on the plantar side of each right hind paw with 0.2 mL of the mixture suspended in normal saline. As described, rats were allocated randomly into 12 groups of eight animals each . Control group (C) animals received normal saline at 3 mL/kg of body weight by intraperitoneal injection. Animals in the standard (STD) group received an intraperitoneal injection of the anti-inflammatory diclofenac (Novartis, Rueil-Malmaison, France) at 100 mg/kg of body weight. The three reference groups (Ref1, Ref2, and Ref3 (a mixture of Ref1 and Ref2)) were treated with intraperitoneal injections of Ru, Pip, or the Ru + Pip mixture at 100 mg/kg of body weight. Test groups (G1–G8, G4*, and G8*) were injected with different MOF formulations, as shown in . Rats in each group were pretreated with the corresponding formulation 30 minutes before administration of the carrageenan/kaolin mixture in a single dose. We used a plethysmometer (Panlab, Cornellà de Llobregat, Spain) to measure paw volume before the stimulus was injected (zero time) and at 1, 2, 3, 4, 5, 6, 8, and 12 hours after the injection, then after 24, 36, and 48 hours. Readings are reported as average variation in paw volume (mL), calculated as the change from the basal value. Results were expressed as the mean percentage of edema inhibition, calculated as follows (Ojewole, ): (4) % edema inhibition = [(edema increase in control − edema increase in test group) / edema increase in control] × 100 Leukocyte migration assay (Azza & Oudghiri, ) Subcutaneous air pouches (20 mL sterile air) were formed on the dorsal thorax of rats in all groups ( n = 8), as described by Haqqi et al. , with some changes. Three days later, 0.5 mL of the carrageenan/kaolin suspension was injected into the resulting cavity in rats of all groups except for control animals in group C, who were injected with 0.9% w/v NaCl. 
Rats in Ref1, Ref2, and Ref3 were administered each of the pure natural agents intraperitoneally at a total of 100 mg/kg of body weight, and the test groups respectively received intraperitoneal injections of the formulations given in . Control group C was administered normal saline intraperitoneally at 3 mL/kg of body weight, and the STD group received intraperitoneal diclofenac at 100 mg/kg of body weight. All treatments and control solutions were administered in one dose. At each planned timepoint (at each hour of the first six hours, then at 8, 12, 24, 36, and 48 hours), for each animal we injected 5 mL of ice-cold saline solution (0.9% w/v NaCl) into each formed cavity, then collected a sample for counting leukocytes. In vivo evaluation of antioxidant activity Experimental design and animal exposures Rats (male albino Wistar, 200 ± 50 g) were kept in plastic cages (six rats/cage) at RT, with access to standard diet and water. Animals were randomly divided into groups as described for the anti-inflammatory experiments, with the following modification: group C (negative control) consisted of untreated rats receiving distilled water for 21 days, and group STD received 100 mg/kg of vitamin C for 21 days. All other groups received a dose of 100 mg/kg of the indicated formulations for 21 days. All doses were administered intraperitoneally as a single dose after suspension in distilled water at 1 mL/100 g of body weight. At each time interval, we used diethyl ether to anesthetize animals intended for sampling. For obtaining blood samples, we created a retro-orbital puncture and collected the blood into heparinized tubes, which were centrifuged at 4 °C for 15 minutes at 15,000 rpm to separate plasma. The resulting plasma was stored at −20 °C for use in the reducing power and DPPH (2,2-diphenyl-1-picrylhydrazyl) assays (Merghem et al., ). Plasma antioxidant capacity using DPPH radical determination We followed Hasani et al. , with a few modifications, to evaluate the ability of the sampled plasma to scavenge DPPH radicals. In brief, a total of 50 μL of plasma was placed in 1250 μL of a solution of DPPH in methanol (2.4 mg/100 mL methanol), followed by incubation in the dark for 30 minutes. After centrifugation and spectrophotometric analysis, we calculated the plasma antioxidant capacity as follows: Radical scavenging activity (%) = [(Ablank − Asample) / Ablank] × 100 (5) Plasma reducing power assessed as ferric-reducing antioxidant power To determine the reducing power of the plasma samples, we followed Narayanaswamy & Balakrishnan to assay sample antioxidant abilities through formation of a colored complex with potassium ferricyanide (ferric-reducing antioxidant power, or FRAP). One milliliter of plasma was mixed with 0.5 mL each of potassium ferricyanide (1% w/v) and phosphate buffer (0.2 M, pH 6.6), followed by incubation for 20 minutes at 50 °C. The reaction was terminated by addition of trichloroacetic acid (10% w/v), followed by a 10-minute centrifugation at 3000 rpm. Distilled water and 0.1 mL FeCl3 (0.1% w/v) were used to dilute 0.5 mL of the supernatant. Five minutes later, the samples were analyzed spectrophotometrically, with higher absorbance indicating greater reducing power. Statistical analysis We analyzed the data using SPSS (Chicago, IL) and give results as means (± standard deviation (SD)) in all tables and figures related to in vitro release. 
For drug-loading content and EE, the data were analyzed with one-way analysis of variance (ANOVA; p < .05 with least significant differences). We used Student’s t-test or, where more than two groups were compared, one-way ANOVA to compare differences between or among groups in the in vivo portions of the study (statistical significance set at p < .05). 
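For readers who want to reproduce the arithmetic, Equations (1)–(5) above reduce to a few one-line calculations. The sketch below is illustrative only: the numerical values are hypothetical, the function names are ours, and Python is used purely for demonstration; it is not the software used in the study.

```python
# Minimal sketch (hypothetical numbers) of the arithmetic behind Equations (1)-(5).
# None of the values below are measured data; they only illustrate the calculations.

def entrapment_efficiency(initial_mg: float, free_in_supernatant_mg: float) -> float:
    """Equation (1): EE (%) from the theoretical input and the unentrapped drug."""
    return (initial_mg - free_in_supernatant_mg) / initial_mg * 100

def total_loading_content(entrapped_mg: float, carrier_mg: float) -> float:
    """Equation (2): entrapped drug relative to the weight of the MOF carrier (%)."""
    return entrapped_mg / carrier_mg * 100

def total_loading_capacity(experimental_content_pct: float, theoretical_content_pct: float) -> float:
    """Equation (3): experimental content relative to the theoretical content (%)."""
    return experimental_content_pct / theoretical_content_pct * 100

def edema_inhibition(control_increase_ml: float, test_increase_ml: float) -> float:
    """Equation (4): percent inhibition of paw edema relative to the control group."""
    return (control_increase_ml - test_increase_ml) / control_increase_ml * 100

def dpph_scavenging(a_blank: float, a_sample: float) -> float:
    """Equation (5): radical scavenging activity (%) from DPPH absorbances."""
    return (a_blank - a_sample) / a_blank * 100

if __name__ == "__main__":
    # Hypothetical single-loading example: 100 mg drug offered to 200 mg MOF,
    # with 25 mg recovered free in the supernatant after loading.
    ee = entrapment_efficiency(100.0, 25.0)                # 75.0 %
    content = total_loading_content(100.0 - 25.0, 200.0)  # 37.5 % (w/w of carrier)
    capacity = total_loading_capacity(content, 50.0)      # vs. 50 % theoretical -> 75.0 %
    print(f"EE = {ee:.1f}%, loading content = {content:.1f}%, loading capacity = {capacity:.1f}%")
    # Hypothetical paw-edema and DPPH readings.
    print(f"edema inhibition = {edema_inhibition(1.20, 0.45):.1f}%")
    print(f"DPPH scavenging  = {dpph_scavenging(0.80, 0.52):.1f}%")
```

Note that, with the 1:2 drug:MOF ratio used in the loading experiments, the theoretical loading content relative to the carrier is 50%, so in this worked example the loading capacity (Equation (3)) coincides numerically with the EE (Equation (1)).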
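The kinetic-model fitting described for the release studies (zero-order, first-order, Hixson–Crowell, Korsmeyer–Peppas, and Higuchi) was carried out with KineDS3. The sketch below, using synthetic data and SciPy, only illustrates how zero-order and Korsmeyer–Peppas fits and their R² values can be obtained in principle; it is not the authors' analysis code, and the release values are invented.

```python
# Illustrative sketch of the kind of model fitting KineDS3 performs; synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 3, 4, 5, 6, 8, 12, 24, 36, 48], dtype=float)       # sampling times (h)
q = np.array([6, 12, 18, 24, 30, 36, 46, 62, 78, 86, 92], dtype=float)  # cumulative release (%)

def zero_order(t, k0):
    """Q(t) = k0 * t (constant release rate)."""
    return k0 * t

def korsmeyer_peppas(t, k, n):
    """Q(t) = k * t**n (power law; the exponent n reflects the transport mechanism)."""
    return k * t ** n

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# The zero-order fit is restricted here to the early, roughly linear stage (<= 12 h).
early = t <= 12
(k0,), _ = curve_fit(zero_order, t[early], q[early])
r2_zero = r_squared(q[early], zero_order(t[early], k0))

# Korsmeyer-Peppas is conventionally fitted to the first ~60% of cumulative release.
kp_mask = q <= 60
(k, n), _ = curve_fit(korsmeyer_peppas, t[kp_mask], q[kp_mask], p0=(5.0, 0.5))
r2_kp = r_squared(q[kp_mask], korsmeyer_peppas(t[kp_mask], k, n))

print(f"zero-order:       k0 = {k0:.2f} %/h, R^2 = {r2_zero:.3f}")
print(f"Korsmeyer-Peppas: k = {k:.2f}, n = {n:.2f}, R^2 = {r2_kp:.3f}")
```

In the Korsmeyer–Peppas model, the fitted exponent n is conventionally used to infer the transport mechanism, which is why it is reported alongside R² when comparing candidate models.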
Post synthetic modification of ZrMOF and TiMOF materials shows the proposed interaction between ZrMOF and TiMOF materials and TS silane. It is suggested that ZrMOF (with the free carboxylic group) does not react with the silane by covalent bonding, whereas TiMOF does. TiMOF can react with TS silane through the free amino group, which interacts with TS, forming NH bonds. 
Consequently, silicon (Si) content was greater in TiMOF (2.64 ± 0.25%) than in ZrMOF (0.14 ± 0.01%) ( Table S1 ). SEM observations According to the FE-SEM images in , ZrMOF particles were aggregated, with non-uniform structures of spherical or oval shape. Sizes ranged from nanometers to micrometers. Further surface modification with silane TS groups in ZrMOFTS yielded no differences. The TiMOF particles, in contrast, showed a dispersed and uniform structure and were mostly characterized by cubic and hexagonal shapes. We noted no changes in morphological structure after silane TS group attachment. From a morphological perspective, TiMOF therefore seemed to be a more promising drug carrier than ZrMOF. XRD characterization shows that all ZrMOF materials exhibited sharp reflection peaks at 2 θ = 7.5° and 25.11°. The acquisition of these peaks indicates successful preparation of ZrMOF (Yang et al., ; Feng et al., ; Hassabo et al., ). After the surface modification with TS silane groups, we observed no new peaks in the ZrMOFTS pattern. In the nanoformulation patterns, several new diffraction peaks were detected at 6.7°, 10.5°, 13.2°, 14–27°, 32.8°, and 40.4°, and other small peaks were seen in all nanoformulations (ZrMOFTS-Ru, ZrMOFTS-Pip, and ZrMOFTS-Ru-Pip), corresponding to free Ru or Pip or Ru + Pip. Concerning the TiMOFs , their pattern was characterized by several sharp reflection peaks from low to medium angles (2 θ = 6.8° to 35°), indicating the successful synthesis of the titanium-based MOF. No new peaks were observed as a result of the surface modification with TS groups. For the nanoformulations, some new diffraction peaks were observed at 9.2°, 10.5°, 13.2°, and 33.7° in all of TiMOFTS-Ru, TiMOFTS-Pip, and TiMOFTS-Ru-Pip. In addition, several broad peaks appeared at the same positions because of overlapping peaks of the drugs and TiMOF. These peaks indicate the presence of Ru or Pip. As indicated by the XRD results for the nanoformulations, Ru and Pip mostly loaded into the MOFs, and small fractions of the drug molecules could be found on the surface in the crystalline phase. This observation confirms that loading of the natural agents into the nanoformulations, either singly or combined, was successful, in line with previous reports of MOFs loaded with various drugs (Rezaei et al., ; Pham et al., ). FTIR characterization As shown in , several peaks could be seen between 400 and 1750 cm −1 , confirming the similar surface compositions of ZrMOF and TiMOF (Vilela et al., ; Sarker & Jhung, ; Li et al., ). Moreover, the spectra obtained for pure Pip and Ru present their main IR bands in the same spectral range of 400–1750 cm −1 . Therefore, comparison of the spectra obtained for the samples before and after modification is difficult. However, as shown in , several bands in the ZrMOFTS spectrum (654, 1120, and 1705 cm −1 ) are slightly more intense than the corresponding bands for ZrMOF. The peaks at 654 cm −1 and 1120 cm −1 especially may reflect stretching vibrations of the silane TS groups’ Si–O–Si and Si–O bonds (Mahdavi et al., ). Other highlighted peaks suggested the presence of ethoxy groups in the modified materials (Kim et al., ). Taken together, these results point to the successful functionalization of TS silane groups into/onto the MOFs. For both types of nanoformulations, new bands corresponding to Ru and Pip were detected at 1130 cm −1 . For samples containing Pip, a very weak peak related to the drug was detected at 2940 cm −1 . 
In addition, peaks overlapping with those of the ZrMOFs are present at 653, 810, 1260, 1367, and 1506 cm −1 . As shown in , in the TiMOFTS spectrum, peaks at 400–650 cm −1 were shifted, whereas bands at 770, 1160, 1540, and 1625 cm −1 had higher intensities compared to TiMOF. This suggests that the TS silane groups were attached to the TiMOFs. For the nanoformulations, new bands were seen in the 850–1190 cm −1 spectral range, pertaining to Ru or Pip or their combination. Also, increased intensities were detected at 440, 515, 773, 1388, and 1540 cm −1 for the nanoformulations compared to TiMOF and TiMOFTS, suggesting the presence of Ru and/or Pip in the nanoformulations. The FTIR results indicate successful incorporation of Ru and/or Pip into the materials. These results are consistent with previous reports describing the loading of other drugs into MOFs (Rezaei et al., ; Chen et al., ; Liu et al., ). As indicated by the collective results from FTIR and XRD, Ru and/or Pip were mainly loaded into the MOFs, with some fraction of molecules remaining on the surface in a crystalline state. Thermal analysis STA characterization and show the results of the thermal analysis of the materials prepared at all stages. Thermogravimetry data indicate that, over the experimental temperature range, the weight loss varied according to the type of MOF material and reached about 68 wt.% and 75 wt.% for ZrMOF and TiMOF, respectively . These results are consistent with mass-loss data obtained for MOF materials, including Zr-MOF (Santos et al., ). After surface modification with TS silane groups, an increase in weight loss was noted for both ZrMOFTS and TiMOFTS. This change could be attributable to the different extent of silane modification, affecting the Si oxidation and/or changes in the thermal stability of the silane groups (Sarker & Jhung, ). This behavior was in accordance with previous work (Li et al., ). The DTG patterns of the modified MOFs were characterized by three stages of mass loss, associated with removal of adsorbed water (centered at ∼90 °C), decomposition of the organic content (centered at ∼220 °C), and destruction of the MOF structure (centered at ∼580 °C for ZrMOF and 420 °C for TiMOF) (Sarker et al., ). Beyond the modified materials, the nanoformulations showed a further increase in weight loss, verifying successful loading into both MOFs. As expected, pure Ru and Pip were totally decomposed (almost 100 wt.%). All DTG curves for the nanoformulations showed intensification compared to the DTG curves of the modified MOFs, as a result of the higher weight loss . There were two stages of mass change during heating to 800 °C. The first stage produced peaks shifted to ∼230 °C and ∼320 °C, corresponding to the main peaks detected for free Ru and Pip at 264 °C and 341 °C, respectively. The second stage showed peaks shifted from their centers to ∼540–550 °C for Ru and Pip, respectively. This shift is connected to the decomposition/volatilization of both natural agents. Of note, the shifted peaks in the nanoformulations appeared to correspond to those for free Ru and Pip, confirming the successful loading process for either single or dual drug loading (Cunha et al., , ; Sarker & Jhung, ; Sarker et al., ). DSC characterization of materials The DSC patterns of all materials indicated that the exothermic processes correlated with mass loss. For Pip and Ru, however, an endothermic signal was detected below 200 °C, probably corresponding to melting. 
Prior to surface modification, a sharp exothermic peak centered at ∼570 °C was detected, a feature unique to the ZrMOFs. After the surface modification, we observed the same peak at lower intensity. Upon preparation of the nanoformulations, the DSC curves of ZrMOFTS-Ru, ZrMOFTS-Pip, and ZrMOFTS-Ru-Pip showed new exothermic peaks at 473–524 °C, corresponding to free Ru and Pip. Free Ru and Pip presented broad exothermic peaks centered at ∼525 °C, arising from their decomposition. These peaks confirmed the presence of the natural agents in the nanoformulations. Concerning the TiMOF material, two broad peaks characteristic of TiMOF were detected at 355 °C and 426 °C. After modification, these peaks were shifted and had slightly higher intensities compared to pristine TiMOF, indicating the attachment of silane groups. The nanoformulations produced new sharp peaks centered at about 325 °C, which could be shifted from the original peaks of the natural agents. Other peaks appeared at the same positions as, or only slightly shifted from, those of free Ru and Pip. These peaks indicate the successful loading of the agents into the nanoformulations. As can be seen, the DSC changes for all of the nanoformulations correlate with the DTG data. Measurements of zeta potential Zeta potential is crucial for estimating the surface charge of nanoparticles and understanding their stability in suspension. All pristine MOFs, TS silane-modified MOFs, and nanoformulations were measured as suspensions in deionized water. We also measured free Ru and Pip for comparison. As shown in , all materials displayed negative zeta potential values of around −37 to −55 mV. Among the ZrMOFs, the least negative value was obtained for ZrMOF (−37.11 ± 1.8 mV), whereas the most negative value was recorded for the ZrMOFTS-Pip nanoformulation (−49.01 ± 2.94 mV). Similarly, TiMOF and TiMOFTS had the least negative zeta values (−37.56 ± 0.75 and −36.51 ± 0.79 mV, respectively), whereas the most negative value was detected for the TiMOFTS-Ru-Pip nanoformulation (−55.53 ± 0.95 mV). Additionally, free Pip and Ru had similar negative zeta values of −47.35 ± 6.58 and −46.36 ± 1.28 mV, respectively. For ZrMOF, the surface modification altered the zeta potential from −37.1 mV (ZrMOF) to −43.21 mV (ZrMOFTS), in good agreement with previous results for MOFs (Hidalgo et al., ; Li et al., ). These findings indicate that all of these materials are electrically stable when suspended in water. One plausible reason is that strongly negative (or positive) zeta potential values generate repulsion between adjacent particles in solution, resulting in good stability and limiting aggregation (Frank et al., ). Generally, sufficient repulsive force, and hence better physical stability, is indicated by a zeta potential more negative than −30 mV or more positive than +30 mV (Joseph & Singhvi, ). In this context, an emulsion with zeta potential values ranging from −41 to −50 mV indicates good stability (Losso et al., ). Accordingly, our prepared system, especially the nanoformulations with Ru or Pip, could be more stable than the others. Particle size measurement shows the mean particle size of the nanoformulations obtained by DLS measurements. The results indicate that the Zr-based nanoformulations had larger particle sizes than the Ti-based nanoformulations. In addition, dual loading affected the particle size, with increases detected for nanoformulations containing both Ru and Pip compared to single loading. The same effect was observed for the mean PDI. 
The ZrMOFTS-Ru-Pip nanoformulation had the highest PDI, with particle sizes approaching the micrometer range, mainly because of the high-molecular-weight zirconium as the inorganic moiety, the high-molecular-weight carboxyl branching, and the involvement of both Ru and Pip in the same formulation. Furthermore, the PDI of all formulations fell within a range that should assure their stability. Of note, the PDI results were in agreement with those for zeta potential, which reflected exceptionally stable formulations. Drug-loading properties In the present study, Ru and Pip were loaded, independently or combined, to Zr-based or Ti-based MOFs. All formulations were subjected to the same preparation method, using the same weight ratios (drug:MOFs) among the preparation components. As shows, total loading capacity (TLC) did not differ significantly for single versus dual loading of Ru and/or Pip into the nanoformulations ( p ˂ .05). We also found no significant difference in EE with single loading of Ru or Pip but did find differences in EE when both Ru and Pip were loaded together in nanoformulations. For TiMOF-based nanoformulations, the results showed a significant effect on TLC, but no significant differences in EE between nanoformulations. As can be seen, Ti-MOF-based nanoformulations significantly increased the TLC for Ru and/or Pip compared to ZrMOF-based nanoformulations. Additionally, the EE for Ru or Pip significantly increased with Ti-MOF compared with Zr-MOF nanoformulations when used in single loading. In contrast, only Zr-MOF nanoformulations significantly increased the EE of Ru and Pip loaded in combination compared to the TiMOF material. The TiMOFTS-Pip nanoformulation had the maximum TLC for Pip (17.11 ± 1.43%), and the TiMOFTS-Ru nanoformulation had the maximum for Ru (15.56 ± 1.24%). The obtained TLCs for Ru and Pip are in line with previous reports for drugs loaded to various MOFs, such as DOX (∼16 wt.%; Bi et al., ) and gentamicin (19 wt.%; Soltani et al., ). In general, both TLC and EE were significantly affected by the type of MOF material. Metal–organic frameworks are excellent drug carriers owing to the combined effect of the pores inside the framework and interactions with functional groups such as amine and carboxylate groups. In the studied case, Ru and Pip can be loaded onto/into the MOF materials by taking advantage of: (i) hydrogen bonding with free amino and carboxylate groups, (ii) chemical bonding with the free silicon center, forming Si–O bonds, and (iii) physical adsorption into the pores of the framework via pi–pi stacking . In vitro release kinetics The release from non-modified MOFs (ZrMOF-Ru, ZrMOF-Pip, TiMOF-Ru, and TiMOF-Pip) at pH 7.4 resulted in fast release profiles, taking place within 24 hours. The release kinetics of the Ti-based formulations showed a significant difference in mean release efficiency (MRE) compared to their Zr analogues. The results suggest that the metal component of the nanocarrier system might be the limiting factor in controlling the release profile of both Ru and Pip in nanoformulations. displays the cumulative release of Ru or Pip as a function of time from nanoformulations designed using silane-modified MOFs. As shown in , at 48 hours, Ru or Pip nanoformulations had a cumulative release of >90%. For the dual-delivery nanoformulations (ZrMOFTS-Ru-Pip and TiMOFTS-Ru-Pip), we calculated the release of each natural agent. 
Their release profiles (dotted lines) indicated that within 48 hours, ∼62% and ∼56% of Ru and Pip, respectively, were released from the ZrMOFTS-Ru-Pip formulation, and ∼71% and ∼65%, respectively, were released from the TiMOFTS-Ru-Pip formulation. The release profiles also differed by carrier: ZrMOFTS-Pip showed a fast release, probably owing to the lack of a chemical connection between the framework and the drug structure, whereas TiMOFTS-Pip showed a slow release, probably owing to chemical bonding between the silicon center and the drug. This effect can be used to control the release behavior of drugs and to construct novel drug carriers. Another feature of release from both MOF materials was that all release profiles could be described by two stages: a zero-order release effect as the first stage, within 12 hours, and a stable sustained release as the second stage, from 12 hours to the end of the experiment. Such mixed release patterns are likely the result of having no burst in the first stage, with a slight increase in Ru or Pip release in the second stage. The observation of a two-stage pattern is in agreement with previous reports, including release of ibuprofen from various MOFs (Silva et al., ; Rojas et al., ; Sarker et al., ; Pham et al., ), 5-fluorouracil from Mg-MOFs (Hu et al., ), caffeine from ZrMOFs (Sarker & Jhung, ), and DOX from a zeolitic imidazolate MOF (Bi et al., ). Next, we fitted the release profiles of Ru and Pip obtained from both types of MOFs to the following kinetic models: zero-order, first-order, Hixson–Crowell, Korsmeyer–Peppas, and Higuchi. With linear regression modeling only, the results indicated that Ru and Pip were released from the nanoformulations according to zero-order kinetics ( R 2 = 0.98–0.99). On the other hand, investigation of the linear and non-linear regressions together showed that the Korsmeyer–Peppas model had the best fit ( R 2 = 0.99–1.00) . Thus, the in vitro release of both Ru and Pip followed the same kinetics regardless of the metal composition (Zr or Ti) of the nanocarrier MOFs. These results are consonant with those of earlier studies of the in vitro kinetics of release of various drugs from different MOF structures (Li et al., ; Santos et al., ). Zero-order kinetics describes the release kinetics of drug diffusion from reservoir-based systems, including MOFs, based on Fickian diffusion (Horcajada et al., ; Peppas & Narasimhan, ; Pham et al., ). Consequently, the zero-order release up to about 12 hours demonstrates that MOF structures can efficiently control drug release without a premature lag or burst. Generally, the Korsmeyer–Peppas model is used to describe the surface degradation/erosion of a formulation containing the drug (Costa & Sousa Lobo, ; Rothstein et al., ). With surface erosion, degradation is restricted to the outermost surface of the porous system without affecting the interior (Pham et al., ). Comparing the release of Ru and Pip from both MOFs showed no significant differences in the mean cumulative release (MCR) parameter. Thus, the MOFs did not affect the maximum released amount of these agents, regardless of type. Concerning the MRE kinetic parameter, Ru and Pip differed significantly in the Zr-MOF nanoformulations but not in the Ti-MOF nanoformulations. Significant differences were observed in the other two kinetic parameters: mean release rate (MRR) and mean release time (MRT), the mean time required for maximum release of a drug or medical agent from its carrier system or dosage form. 
MRT is a kinetic parameter that is a function of either MRR or MRE or both. As MRT increases, both MRE and MRR are expected to decrease, unless influenced by other external factors. A decrease in MRT reflects a highly efficient system that allows easy release of a drug into the medium and a high degree of solubility, as indicated by a high MRR value. Furthermore, for loading of Ru and Pip in combination, the results showed that Ru kinetically exceeded Pip, given the highly significant difference between them for MCR, MRR, and MRT. The amount of natural agent that is loaded directly affects the relation between MRT on one side and MRR and MRE on the other. Overall, the Ti-based nanoformulations proved to be more efficient nanocarrier systems for Ru and Pip, singly or together, than the Zr-based nanoformulations. A review of the literature shows that the release kinetics of Ru can vary depending on the drug carrier used in the various nanosystems but that it mostly releases with mixed kinetic mechanisms, similar to our results. In this way, the release pattern of Ru from solid lipid nanoparticles is a good fit to the first-order and Korsmeyer–Peppas models (Pandian et al., ), in keeping with results showing Eudragit nanosphere release following Korsmeyer–Peppas and phase II kinetics (Asfour & Mohsen, ), and mesoporous silica nanoparticle release following Higuchi and first-order kinetics (Karnopp et al., ). Concerning the co-delivery strategy, the release of Ru or Pip with other drugs also can occur via combined kinetic mechanisms. The release of Pip and DOX from lecithin–chitosan nanoparticles follows Korsmeyer–Peppas kinetics, with an n value suggesting a Fickian diffusion mechanism (Alkholief, ). The release kinetics for Ru and curcumin co-delivery from chitosan nanoparticles, however, followed a non-Fickian transport model (Ramaswamy et al., ). Modification of the MOF surface with silane clearly plays a crucial role in the in vitro release kinetics. This becomes evident when the release profiles of the silane-free formulations are compared with those of the silane-modified nanoformulations: silane modification of the MOFs leads to a long, sustained release (within 48 hours) compared to the fast/burst release (24 hours) of MOFs without silane modification. All surface-modified MOF carriers were shown to be convenient DDSs for release of potent drugs, in small or extended doses. The versatility of their design permits tailoring of adequate DDSs, optimized to the desired scope of the suggested treatment plan, as per the patient’s medical requirements (Ganesh et al., ). Preference is therefore for silane-modified MOFs rather than silane-free MOF structures. In vivo studies The antioxidant and anti-inflammatory activities of Ru and Pip have previously been described (Selvendiran et al., ; Bang et al., ; Lee et al., ; Mahmoud, ; Ramaswamy et al., ; Enogieru et al., ). Here, we evaluated whether these properties can be enhanced using a nanoformulation system. Recent evidence indicates that nanoformulations could be an alternative way to improve the pharmacological effects of natural agents (Yavarpour-Bali et al., ). Additionally, the pharmacological effects may offer medical value against many neurodegenerative diseases (Khan et al., ), including Alzheimer’s, Parkinson’s, and Huntington’s diseases and oxidative stress, possibly because of a shared underlying mechanism of neuronal loss, inflammation, and oxidative stress (Enogieru et al., ). 
Targeted therapies for these conditions are crucially needed because of the progressive neuronal loss and related impairments in cognition and memory (Aarsland et al., ; Magalingam et al., ). Evaluation of the nanoformulations for anti-inflammatory effects Amelioration of induced paw edema Carrageenan-induced paw edema in rats is a typical model for systemic evaluation of anti-inflammatory activity; carrageenan is still used as a phlogistic agent because of its non-antigenic nature and the absence of noticeable adverse reactions (Eze et al., ). However, kaolin, because of its clay nature, may be preferable to carrageenan because it does not lead to antigenicity or hypersensitivity reactions. Therefore, carrageenan- and kaolin-induced edema have been combined into one widely used model (Pashmforosh et al., ; Sur et al., ), and we followed this model here. As shown in , the results demonstrated that nanoformulations containing Ru or Pip as a single loading caused a significantly greater inhibition of paw edema than nanoformulations containing the combined loading of Ru and Pip together (G4 and G8). An even greater percent inhibition was obtained with injection of G4* and G8*, which were mixtures of the single-loaded Ru and Pip nanoformulations. One explanation for this observation is the better anti-inflammatory action obtained with nanoformulations containing one natural agent rather than two, although a synergistic effect may also have occurred. The results are generally in agreement with the in vitro release findings showing that single-agent nanoformulations released more agent than dual-agent nanoformulations. We note that rat paw edema reached a maximum inflammatory volume at three hours after the stimulant injection, which would explain the initially low inhibition percentage of the administered treatment in all groups. The significantly lower inhibition ability of free Ru and Pip (Ref1 and Ref2) compared to the STD group is mainly attributed to the greater solubility of the standard drug. Briefly, these data showed significant inhibition exerted by the nanoformulations compared to the standard drug diclofenac or free Ru and/or Pip. The findings are in accordance with our previous data showing a notable anti-inflammatory effect, relative to the free forms, for mesoporous silica nanoparticle-based nanoformulations of the flavonoid quercetin and shikimic acid in an induced-inflammation rat model (AbouAitah et al., ). Other studies also have reported this effect in animal tests (Xu et al., ; Rachmawati et al., ; de Almeida et al., ), showing the importance of nanoformulation delivery compared to traditional methods of delivering anti-inflammatory drugs. Leukocyte migration Inflammation induced by carrageenan/kaolin occurs in two phases. The first phase involves histamine, kinin, and serotonin release, and the second phase involves prostaglandin, protease, and lysosomal enzyme release. The first phase proceeds during the first hour after stimulus injection, and the second phase carries over into hours 3 and 4 (Mondal et al., ). As shown in , the nanoformulations significantly reduced the paw volume compared to the free natural agents, STD, and control groups. The results were also confirmed through the leukocyte count, in which a decrease indicates the bio-efficiency of the injected substance/material through an anti-inflammatory effect. The maximum number of leukocytes migrating to the air pouch after stimulus injection was about 5.3 × 10⁵ cells/mL and was found in control group C.
This count was significantly higher than in any other group. The leukocyte count for the STD group was about 2.4 × 10⁵ cells/mL, and the Ref1, Ref2, and Ref3 groups, treated respectively with free Ru, Pip, and their mixture, had significantly lower leukocyte counts (3.4 × 10⁵, 3.1 × 10⁵, and 2.6 × 10⁵ cells/mL, respectively) than the control group. Of the reference groups, Ref3 (Ru + Pip) had the lowest value, implying greater efficiency, possibly because of the bioenhancing nature of Pip in the drug combination. As noted in the rat paw edema experiment, diclofenac was rapidly eliminated compared to the natural agents, whose effect lasted until the end of the experiment. All nanoformulation groups showed a significant reduction compared to the other treatments, in agreement with previous reports (de Almeida et al., ). Additionally, groups G4* and G8* showed the best results, with the lowest leukocyte counts at 1.4 × 10⁵ and 1.1 × 10⁵ cells/mL, respectively. A possible mechanism of action could be a change in leukocyte migration into the tissues and the target organ, in addition to a putative anti-prostaglandin and antioxidant effect of both Ru and Pip. Evaluation of antioxidant effects in rats Among the most commonly used biomarkers in the assessment of antioxidant effectiveness is plasma antioxidant capacity. The idea is based on the network of a large number of endogenous antioxidants in plasma. These antioxidants can show complementary or synergistic behavior, providing efficient protection against reactive oxygen species. Among the methods available for evaluating antioxidant activity are FRAP, Trolox equivalent antioxidant capacity, total radical absorption potential, and the radical scavenging activity of DPPH. In the present work, we used two complementary tests: DPPH scavenging activity and FRAP. Effect on plasma antioxidant capacity using DPPH radical and FRAP reducing power Oral administration of Ru and/or Pip, either free or as nanoformulations, led to a general enhancement of the plasma antioxidant capacity. This increase was statistically significant compared with the control group (15.25 ± 1.46%), which is represented as the basal line in the figure. Also, loading of single Ru or Pip was associated with higher antioxidant effects than when the combination was loaded. Administration of G4* (40.20 ± 1.02) and G8* (45.10 ± 2.08) resulted in the highest significant antioxidant effect, compared to all other nanoformulations and the standard drug. Thus, administering the two independent single-drug nanoformulations together by mixing them after preparation had a more significant effect than their administration in a co-delivery nanoformulation. The likely explanation is competition between Ru and Pip within the same MOF, as well as their shared antioxidant properties. Similarly, the plasma reducing power based on FRAP analysis indicated that G4* and G8* had the highest and most significant antioxidant effect compared to the other groups and the control. We can draw the following conclusions from the antioxidant and anti-inflammatory effects detected here. First, the use of MOF carriers improves these effects compared to free natural agents and standard drugs. These results are in accord with other reports on nanodelivery systems versus free natural agents (de Almeida et al., ). Second, a mixture of single-loaded Ru and Pip enhances the antioxidant and anti-inflammatory effects compared with dual-loading versions.
Third, the anti-inflammatory and antioxidant effects of Ru and Pip appear to be quite similar. Finally, the TiMOF nanoformulations are more likely candidates for therapeutic delivery than the Zr-MOF nanoformulations because they enhance both effects to a greater degree. For these reasons, we suggest that TiMOFs may be promising for developing DDSs for natural agents. Future investigations are needed to target specific neurodegenerative disease models. shows the proposed interaction between the ZrMOF and TiMOF materials and TS silane. It is suggested that ZrMOF (with the free carboxylic group) does not react with the silane by covalent bonding, whereas TiMOF does react. TiMOF can react with TS silane through the free amino group, which interacts with TS, forming NH bonds. Consequently, the silicon (Si) content was greater in TiMOF (2.64 ± 0.25%) than in ZrMOF (0.14 ± 0.01%) (Table S1). According to the FE-SEM images in , ZrMOF particles were aggregated, with non-uniform structures of a spherical or oval shape. Sizes ranged from nanometers to micrometers. Further surface modification by silane TS groups in ZrMOFTS yielded no differences. Regarding TiMOF, these particles showed a dispersed and uniform structure and were mostly characterized by cubic and hexagonal shapes. We noted no changes in the morphological structures after silane TS group attachment. From a morphological perspective, TiMOF seemed to be a more promising drug carrier than ZrMOF. shows that all ZrMOF materials exhibited sharp reflection peaks at low angles (2θ = 7.5° and 25.11°). The presence of these peaks indicates successful preparation of ZrMOF (Yang et al., ; Feng et al., ; Hassabo et al., ). After the surface modification with TS silane groups, we observed no new peaks in the ZrMOFTS pattern. In the nanoformulation patterns, several new diffraction peaks were detected at 6.7, 10.5, 13.2, 14–27, 32.8, and 40.4°, and other small peaks were seen in all nanoformulations (ZrMOFTS-Ru, ZrMOFTS-Pip, and ZrMOFTS-Ru-Pip, corresponding to free Ru, Pip, or Ru + Pip). Concerning the TiMOFs, their pattern was characterized by several sharp reflection peaks from low to medium angles (2θ = 6.8° to 35°), indicating the successful synthesis of the titanium-based MOF. No new peaks appeared as a result of the surface modification with TS groups. For the nanoformulations, some new diffraction peaks were observed at 9.2°, 10.5°, 13.2°, and 33.7° in all of TiMOFTS-Ru, TiMOFTS-Pip, and TiMOFTS-Ru-Pip. In addition, several extensive peaks appeared at the same positions because of overlapping peaks of the drugs and TiMOF. These peaks indicate the presence of Ru or Pip. As indicated by the XRD results for the nanoformulations, Ru and Pip were mostly loaded into the MOFs, and small fractions of the drug molecules could be found on the surface in the crystalline phase. This observation confirms that loading of the natural agents into the nanoformulations, either singly or combined, was successful, in line with previous reports of MOFs loaded with various drugs (Rezaei et al., ; Pham et al., ). As shown in , several peaks could be seen between 400 and 1750 cm⁻¹, confirming the similar surface compositions of ZrMOF and TiMOF (Vilela et al., ; Sarker & Jhung, ; Li et al., ). Moreover, the spectra obtained for pure Pip and Ru present their main IR bands in the same spectral range (400–1750 cm⁻¹). Therefore, the comparison of the spectra obtained for the samples before and after modification is difficult.
However, as shown in , in the ZrMOFTS spectrum several bands (654, 1120, and 1705 cm⁻¹) are slightly more intense than the corresponding bands for ZrMOF. The peaks at 654 cm⁻¹ and 1120 cm⁻¹ in particular may reflect stretching vibrations of the Si–O–Si and Si–O bonds of the silane TS groups (Mahdavi et al., ). Other highlighted peaks suggested the presence of ethoxy groups in the modified materials (Kim et al., ). Taken together, these results point to the successful surface functionalization of the MOFs with TS silane groups. For both types of nanoformulations, a new band corresponding to Ru and Pip was detected at 1130 cm⁻¹. For samples containing Pip, a very weak peak related to the drug was detected at 2940 cm⁻¹. In addition, peaks overlapping with those of the ZrMOFs are present at 653, 810, 1260, 1367, and 1506 cm⁻¹. As shown in , in the TiMOFTS spectrum, peaks at 400–650 cm⁻¹ were shifted, whereas bands at 770, 1160, 1540, and 1625 cm⁻¹ had higher intensities compared to TiMOF. This suggests that the TS silane groups were attached to TiMOF. For the nanoformulations, new bands were seen in the 850–1190 cm⁻¹ spectral range, pertaining to Ru or Pip or their combination. Also, increased intensities were detected at 440, 515, 773, 1388, and 1540 cm⁻¹ for the nanoformulations compared to TiMOF and TiMOFTS, suggesting the presence of Ru and/or Pip in the nanoformulations. The FTIR results indicate successful incorporation of Ru and/or Pip into the materials. These results are consistent with previous reports describing loading of other drugs into MOFs (Rezaei et al., ; Chen et al., ; Liu et al., ). As indicated by the collective results from FTIR and XRD, Ru and/or Pip were mainly loaded into the MOFs, with some fraction of the molecules remaining on the surface in a crystalline state. STA characterization and show the results of the thermal analysis of the materials prepared at all stages. Thermogravimetry data indicate that, in the experimental temperature range, the weight loss varied according to the type of MOF material and reached about 68 wt.% and 75 wt.% for ZrMOF and TiMOF, respectively. These results are consistent with mass-loss data obtained for MOF materials, including Zr-MOF (Santos et al., ). After surface modification with TS silane groups, a gain was noted for both materials, ZrMOFTS and TiMOFTS. This change could be attributable to the different extent of silane modification, affecting the Si oxidation and/or changes in the thermal stability of the silane groups (Sarker & Jhung, ). This behavior was in accordance with previous work (Li et al., ). The DTG patterns of the modified MOFs were characterized by three stages of mass loss, associated with adsorbed water removal (centered at ∼90 °C), decomposition of the organic content (centered at ∼220 °C), and destruction of the MOF structure (centered at ∼580 °C for ZrMOF and 420 °C for TiMOF) (Sarker et al., ). Compared with the modified materials, the nanoformulations showed a further increase in weight loss, verifying successful loading into both MOFs. As expected, pure Ru and Pip were totally decomposed (almost 100 wt.%). All DTG curves for the nanoformulations were intensified compared to the DTG curves of the modified MOFs, as a result of the higher weight loss. There were two stages of mass change during the heating to 800 °C. The first stage produced peaks shifted to ∼230 °C and ∼320 °C, corresponding to the main peaks detected for free Ru and Pip at 264 °C and 341 °C, respectively.
The second stage showed peaks shifted to centers at ∼540–550 °C for Ru and Pip, respectively. This shift is connected to the decomposition/volatilization of both natural agents used. Of note, the shifted peaks in the nanoformulations appeared to correspond to those of free Ru and Pip, confirming the successful loading process for either single or double drug loading (Cunha et al., , ; Sarker & Jhung, ; Sarker et al., ). DSC characterization of materials The DSC patterns of all materials during the experiments indicated that the exothermic processes correlated with mass loss. However, for Pip and Ru, an endothermic signal was detected below 200 °C, probably corresponding to a melting process. Prior to surface modification, a sharp exothermic peak centered at ∼570 °C was detected, a feature unique to the ZrMOFs. After the surface modification, we observed the same peak at a lower intensity. Upon preparation of the nanoformulations, the DSC curves of ZrMOFTS-Ru, ZrMOFTS-Pip, and ZrMOFTS-Ru-Pip showed new exothermic peaks at 473–524 °C, corresponding to free Ru and Pip. Free Ru and Pip presented broad exothermic peaks centered at ∼525 °C, arising from their decomposition. These peaks confirmed the presence of the natural agents in the nanoformulations. Concerning the TiMOF material, two broad peaks characteristic of TiMOF were detected at 355 °C and 426 °C. After silane modification, these peaks were shifted and slightly more intense compared to pristine TiMOF, indicating the attachment of the silane groups. The nanoformulations produced new sharp peaks centered at about 325 °C, which could be shifted from the original peaks of the natural agents. Other peaks appeared at the same positions as, or only slightly shifted from, those of free Ru and Pip. These peaks indicate the successful loading of the natural agents into the nanoformulations. As can be seen, the DSC changes for all of the nanoformulations correlate with the DTG data.
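As a brief aside on the thermogravimetric results above, the extra weight loss of a loaded nanoformulation relative to the empty carrier is sometimes used as a rough cross-check of the drug loading. The sketch below shows that arithmetic with hypothetical weight-loss fractions; it is not a calculation reported in this study, and it assumes the loaded drug decomposes completely in the analysed range.

```python
# Rough TGA-based estimate of drug loading (wt.%); illustrative values only.
# Assumption: the loaded drug decomposes completely, while the empty carrier
# loses a characteristic fraction of its own mass over the same range.

carrier_loss = 0.68        # mass fraction lost by the empty MOF carrier
formulation_loss = 0.74    # mass fraction lost by the drug-loaded nanoformulation

# If x is the drug mass fraction: formulation_loss = x * 1.0 + (1 - x) * carrier_loss
drug_loading = (formulation_loss - carrier_loss) / (1.0 - carrier_loss)
print(f"estimated drug loading ~ {100 * drug_loading:.1f} wt.%")
```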
Zeta potential is crucial for estimating the surface charge of nanoparticles and understanding their stability in suspension. All pristine MOFs, TS silane-modified MOFs, and nanoformulations were measured from suspensions in deionized water. We also measured free Ru and Pip for comparison. As shown in , all materials displayed negative zeta potential values of around −37 to −55 mV. Among the ZrMOFs, the least negative value was obtained for ZrMOF (−37.11 ± 1.8 mV), whereas the most negative value was recorded for the ZrMOFTS-Pip nanoformulation (−49.01 ± 2.94 mV). Similarly, TiMOF and TiMOFTS had the least negative zeta values (−37.56 ± 0.75 and −36.51 ± 0.79 mV, respectively), whereas the most negative value was detected for the TiMOFTS-Ru-Pip nanoformulation (−55.53 ± 0.95 mV). Additionally, free Pip and Ru had similar negative zeta values of −47.35 ± 6.58 and −46.36 ± 1.28 mV, respectively. For ZrMOF, the surface modification altered the zeta potential from −37.1 mV (ZrMOF) to −43.21 mV (ZrMOFTS), in good agreement with previous results for MOFs (Hidalgo et al., ; Li et al., ). These findings may indicate that all of these materials are electrically stable when suspended in water. One plausible reason is that high-magnitude (negative or positive) zeta potential values generate repulsion between adjacent particles in solution, resulting in good stability and limiting aggregation (Frank et al., ).
Generally, sufficient repulsive force is indicated by zeta potential values beyond −30 mV or +30 mV, leading to better physical stability (Joseph & Singhvi, ). In this context, an emulsion with zeta potential values ranging from −41 to −50 mV indicates good stability (Losso et al., ). Accordingly, our prepared system, especially the nanoformulations with Ru or Pip, could be more stable than others. shows the mean particle size of the nanoformulations determined by DLS measurements. The results indicate that the Zr-based nanoformulations had larger particle sizes than the Ti-based nanoformulations. In addition, the dual loading affected the particle size, with increases detected for nanoformulations containing both Ru and Pip compared to single loading. The same effect was obtained for the mean PDI. The ZrMOFTS-Ru-Pip nanoformulation had the highest PDI, with sizes almost in the micrometer range, mainly because of the high-molecular-weight zirconium as the inorganic moiety, the high-molecular-weight carboxyl branching, and the involvement of both Ru and Pip in the same formulation. Furthermore, the PDI of all formulations came within a range that should assure their stability. Of note, the PDI results were in agreement with those for the zeta potential, which reflected exceptionally stable formulations. In the present study, Ru and Pip were loaded, independently or combined, into Zr-based or Ti-based MOFs. All formulations were prepared by the same method, using the same weight ratios (drug:MOF) among the preparation components. As shows, the total loading capacity (TLC) did not differ significantly for single versus dual loading of Ru and/or Pip into the nanoformulations at the .05 level. We also found no significant difference in encapsulation efficiency (EE) with single loading of Ru or Pip, but we did find differences in EE when both Ru and Pip were loaded together in the nanoformulations. For the TiMOF-based nanoformulations, the results showed a significant effect on TLC but no significant differences in EE between nanoformulations. As can be seen, the Ti-MOF-based nanoformulations significantly increased the TLC for Ru and/or Pip compared to the ZrMOF-based nanoformulations. Additionally, the EE for Ru or Pip increased significantly with Ti-MOF compared with Zr-MOF nanoformulations when used in single loading. In contrast, only the Zr-MOF nanoformulations significantly increased the EE of Ru and Pip loaded in combination compared to the TiMOF material. The TiMOFTS-Pip nanoformulation had the maximum TLC for Pip (17.11 ± 1.43%), and the TiMOFTS-Ru nanoformulation had the maximum for Ru (15.56 ± 1.24%). The obtained TLCs for Ru and Pip are in line with previous reports for drugs loaded into various MOFs, such as DOX (∼16 wt.%; Bi et al., ) and gentamicin (19 wt.%; Soltani et al., ). In general, both TLC and EE were significantly affected by the type of MOF material. Metal-organic frameworks are excellent drug carriers owing to the synergy between the pores inside the framework and interactions with functional groups such as amine and carboxylate groups. In the studied case, Ru and Pip can be loaded onto/into the MOF materials by taking advantage of (i) hydrogen bonding with free amino and carboxylate groups, (ii) chemical bonding with the free metal ion center (silicon), forming Si–O bonds, and (iii) physical adsorption into the pores of the framework via π–π stacking. In vitro release kinetics The release from the non-modified MOFs (ZrMOF-Ru, ZrMOF-Pip, TiMOF-Ru, and TiMOF-Pip) at pH 7.4 resulted in fast release profiles, taking place within 24 hours.
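As a practical aside, cumulative release profiles such as those described here are usually assembled from assayed aliquot concentrations, with a correction for the medium withdrawn and replaced at each time point. The sketch below illustrates that bookkeeping with hypothetical volumes and concentrations; it is a generic example rather than the exact protocol of this study.

```python
import numpy as np

# Hypothetical sampling setup for a release experiment (illustrative only)
V_total = 50.0        # mL of release medium
V_sample = 2.0        # mL withdrawn (and replaced with fresh medium) per time point
loaded_drug_mg = 5.0  # total drug contained in the tested formulation

times_h = [1, 2, 4, 8, 12, 24, 48]
conc_mg_per_ml = [0.010, 0.018, 0.032, 0.051, 0.066, 0.082, 0.092]  # assayed concentrations

cumulative_pct = []
removed_mg = 0.0  # drug already withdrawn with previous aliquots
for c in conc_mg_per_ml:
    released_mg = c * V_total + removed_mg      # drug currently in the medium plus drug removed earlier
    cumulative_pct.append(100.0 * released_mg / loaded_drug_mg)
    removed_mg += c * V_sample                  # account for this sampling event

for t, pct in zip(times_h, cumulative_pct):
    print(f"{t:>3} h : {pct:5.1f} % released")
```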
The release kinetics of the Ti-based formulations showed a significant difference in the MRE value compared with their Zr analogues. The results suggest that the metal component of the nanocarrier system might be the limiting factor controlling the release profiles of both Ru and Pip in the nanoformulations.
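For completeness, the following sketch shows one simple way to derive summary release parameters, such as a mean release time and an average release rate, from a cumulative release profile, in the spirit of the MRT and MRR comparisons discussed earlier. The profile values are hypothetical, and these definitions are generic illustrations rather than the exact estimators used in this work.

```python
import numpy as np

# Hypothetical cumulative release profile (% released vs. time in hours)
t = np.array([0, 1, 2, 4, 8, 12, 24, 48], dtype=float)
q = np.array([0, 8, 15, 28, 50, 68, 84, 92], dtype=float)

dq = np.diff(q)                 # incremental amount released in each interval
t_mid = (t[:-1] + t[1:]) / 2.0  # midpoint of each interval

mrt = np.sum(t_mid * dq) / np.sum(dq)   # mean release time (h), weighted by amount released
mrr = q[-1] / t[-1]                     # crude average release rate (%/h) over the experiment

print(f"mean release time ~ {mrt:.1f} h, average release rate ~ {mrr:.2f} %/h")
```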
A novel anti-inflammatory and antioxidant nanoformulation consisting of MOFs loaded with Ru (a flavonoid) and/or Pip (an alkaloid) was developed. The MOF carrier particles were surface modified to yield Ti-MOFTS and Zr-MOFTS. Nanoformulations loaded with one of the agents as well as both together were compared with the natural agents without carrier. Paw edema and leukocyte migration activity were significantly more reduced in rats intraperitoneally injected with nanoformulations than with free Ru and/or Pip. The best results were obtained when rats were injected with a nanoformulation containing a mixture of single-drug nanoformulations of Ru and Pip. A similar trend was observed for the antioxidant effect. Overall, a high total loading content was achieved, at 17.11 ± 1.43% for Pip and 15.56 ± 1.24% for Ru loading into Ti-MOF. For dual loading, Ti-MOFs could incorporate about 14% of Ru and about 13% of Pip, demonstrating not only the potential to load two agents but also a high loading capacity at ∼27%. The silane-modified MOFs showed a sustained release effect within 48 hours compared to un-modified MOFs where a fast release within 24 hours was observed. Release of the drugs from the silane-modified MOF carriers followed two stages, suggesting mixed release kinetics at pH 7.4. The first stage followed zero-order kinetics for the first 12 hours, and the second stage was a stable release from 12 up to 48 hours, fitting the Korsmeyer–Peppas model. The prepared nanoformulations showed predictable kinetic release patterns and important enhancement in anti-inflammatory and antioxidant activities over free natural agents in the in vivo studies.
A national training course for clinical trainers in family medicine
2a91ff4c-c2d3-44c7-8fec-0ea481bb4edc
10839166
Family Medicine[mh]
The main role of family physicians (FPs), as described in the national position article of the South African Academy of Family Physicians (SAAFP), is to strengthen the district health system. Family physicians are ideally situated to improve the quality of patient care and promote the safety of patients, through teamwork and capacity building, in district hospitals and primary healthcare. The national Train the Clinical Trainers (TCT) course was introduced in 2014, as a joint venture between the Royal College of General Practitioners (RCGP) and the SAAFP, to address the shortage of FPs and to improve the throughput of the nine South African training programmes. The strategy was to improve the quality of clinical training in the workplace in order to improve the availability and competence of newly qualified FPs in the country. If you want good clinicians, you need to train them in the same workplace where they will eventually practise independently. Vocational or job training for general practitioners and family medicine registrars is not new. As early as 1985, the Education Committee of the then South African Academy of Family Practice introduced informal vocational training, to address the shortage of doctors in rural areas. This education committee was also responsible for the training of clinical trainers between 1985 and 1990. However, the recognition of family medicine as a speciality in 2007 led to formal registrar posts and specialist training programmes that required competent clinical trainers. The vision of the TCT course is to promote and develop postgraduate family medicine training, although the skills are applicable to all workplace-based student training and capacity building of staff. The aim of the 5-day TCT is to equip workplace-based clinical trainers with an essential set of educational skills, which can be further developed through mentoring and support. The annual TCT incrementally creates a critical mass of clinical trainers who are competent and confident to train and assess registrars in the workplace. It also provides an opportunity for the professional development of trainers. The TCT course is built around the educational principle of learner-centredness. The design enables the course to be flexible, adaptable and context-specific. It was important to adapt the RCGP course to the South African context, where most training takes place on the district health platform. The course was aligned with the National Unit Standards for Family Medicine, as these defined the learning outcomes for registrars and what clinical trainers were attempting to achieve. All assessments, training and course content were designed with the National Unit Standards in mind. The use of assessment tools from the national portfolio of learning for registrars made the training more authentic. The role of the portfolio, to keep track of continuous, reliable and valid workplace-based assessment, is emphasised throughout the course. The course covers the roles and responsibilities of trainers and learners, the learning environment, alignment with the curriculum, assessment for and of learning, and leadership. gives an overview of the course. The importance of constructive alignment between learning outcomes, assessment, teaching methods and content is central to the training. Effective feedback as a learning tool is emphasised and demonstrated throughout the course. This includes feedback on feedback where trainers receive feedback on the way that they give feedback to trainees. 
On day 4 of the course, participants conduct a brief simulated training session, where they incorporate what they learned and then receive feedback on their training. Sessions focus on the learning needs of the learners to support both organisational and professional development. On the final day, participants perform a self-assessment of their expertise as a clinical trainer and create a learning plan for their further development. Participants are provided with all the course materials on a memory stick and if funds allow, they are also provided with the ‘Essential Handbook for GP Training and Education’. The Education and Training Committee of the SAAFP is responsible for ensuring the quality and content of the TCT course. During training, facilitators (trainers of the course) meet at the end of each day to identify what worked well and what could be done even better. This, together with the daily feedback from the participants (clinical trainers) is then incorporated into the training of the next day and the training plan is updated for future courses. The course facilitators also identify possible new facilitators who can be trained to help with future TCTs. New facilitators are supported by accredited facilitators and perform training under their supervision until they are ready to train on their own. To ensure equity, each university in South Africa is allowed to send two participants to the TCT course that takes place annually in a central venue. The SAAFP sponsors the costs of the facilitators and running the course, while the universities cover the costs of their participants in terms of travel and accommodation. The small group (18–20 participants) training allows deep learning and is led by 3–4 facilitators. Most participants are initially hesitant to engage in group activities. This is addressed by applying adult learning (andragogical) strategies during theoretical and practical sessions. Adults are internally motivated, goal-orientated and practical; therefore, clinical trainers should focus on facilitating learning rather than being prescriptive. At the end of each course, participants fill in an anonymous evaluation form to give feedback. This feedback includes the relevance of the topics, training methods used and facilitator evaluations. The course coordinator sends individual feedback to each facilitator on their performance and indicates areas of excellence and aspects that should be considered for further development. Participants are expected to share their personal development plans with their own departments. The intention is that their training sites are then visited by their departments to monitor and support their development as well as assess the learning environment. Unfortunately, this does not always happen. Two studies have evaluated the TCT. The first study looked at the impact of the TCT by visiting five participants in their workplaces. The measurement consisted of video recordings of registrar training, a pre-visit self-assessment form, interviews with participants and registrars and an assessment of the learning portfolio of each registrar. The participants identified positive changes after the TCT course as their training became more learner-centred; they structured registrar training better, and they were more confident as clinical trainers. The difficulties with the logistics of site visits were also identified. In the second article, a 360-degree evaluation and self-assessment were performed by trainers after participation in the TCT course. 
The results showed significant improvement in their clinical training 3 months after the course. Currently, the training programmes are adopting an e-portfolio (based on SCORION software) and introducing entrustable professional activities (EPAs) for workplace-based assessment. The e-portfolio has been included in the TCT to practically demonstrate some of the assessment tools and enable clinical trainers to use the software. By using the tools in a protected environment during the course, trainers gain confidence and a better understanding of concepts, such as benchmarking and competence. In 2023, the first group of TCT facilitators were accredited by the SAAFP as competent and effective clinical trainers in family medicine. There is an opportunity for TCT participants to also be accredited as clinical trainers by submitting two reports of satisfactory on-site developmental visits and a 360-degree assessment of their expertise to the Education and Training Committee of the SAAFP. The intention is to incentivise clinical trainers to develop the expertise necessary for accreditation and to receive recognition. The next step is to provide regular support and updates to the 166 national and 25 international trainees of the TCT course through establishing a special interest group for clinical trainers within the SAAFP. As the training evolves, many previous trainees will need updates on new developments in workplace-based assessment and training. The group will utilise webinars, ad hoc training events and workshops, particularly at the National Family Practitioners Conference, to connect and capacitate clinical trainers. Through the primary care and family medicine (PRIMAFAMED) network in sub-Saharan Africa, we would like to expand the training to more African countries. This was done previously through the Family Medicine Leadership, Education and Assessment Programme (FaMLEAP). In its current format, the TCT course is relevant and sustainable as all training facilities are committed to improving workplace-based training and assessment. The course is ideal for improving both theoretical and practical training practices, as it is very interactive and participants learn from each other. We believe that the training of FPs is in good hands, if we continue to train competent and confident clinical trainers.
Asia‐inclusive drug development leveraging principles of
e3b99258-0bcb-4f62-a81a-adc5d798bcb2
11500040
Pharmacology[mh]
In recent years, the pharmaceutical industry has witnessed a significant shift in drug development strategies, particularly with regard to inclusivity and global reach. Although implementation of the International Conference on Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH) E5 (ethnic bridging) guidelines is common, drug development programs initiated in the Western region have traditionally included Asian populations at later timepoints due to regulatory and administrative barriers. This has led to delays in bringing new therapies to Asian regions. With the recent regulatory reform in China and its drug administration joining the ICH, early participation in global clinical development is now feasible due to substantial streamlining of regulatory processes and application of ICH E17 principles for efficient design and analysis of multiregional clinical trials (MRCTs). This paradigm shift has paved the way for Asia-inclusive drug development, where countries such as Japan and China have emerged as key players in early clinical development. Complementing the ICH E5 and E17 guidelines is the proposed ICH M15 guideline for implementing model-informed drug development (MIDD). MIDD enhances quantitative understanding of variability in dose/exposure–response and should, therefore, enable a scientifically rigorous approach to Asia-inclusive drug development aligned with ICH E17, that is, consideration of drug- and disease-related intrinsic and extrinsic factors in the design of MRCTs. Herein, we share recent case examples illustrating application of these concepts for Asia-inclusive drug development across therapeutic areas (oncology, neurology, and immunology) in the EMD Serono portfolio, from early- through late-stage clinical development. These cases illustrate the application of population pharmacokinetic (PK) (tuvusertib, enpatoran, tepotinib, berzosertib), PK/pharmacodynamic (PD) (tuvusertib, enpatoran, cladribine), and disease progression (enpatoran) modeling and simulation to assess consistency in drug- and/or disease-related intrinsic and extrinsic factors and to support dosing for Asian populations. We aim to demonstrate the value of MIDD and a Totality of Evidence approach in enabling timely Asia-inclusive MRCTs in the context of a global phase I (tuvusertib), phase II (enpatoran and berzosertib), or pivotal registrational (tepotinib) clinical trial, and as a key component of evidence generation for characterizing the benefit/risk of a drug in Asian populations (cladribine). Tuvusertib Tuvusertib is a potent, selective, orally administered ataxia telangiectasia and Rad3-related (ATR) protein kinase inhibitor being evaluated as an anticancer drug in phase I and II trials as monotherapy and in combination settings. In part A1 (monotherapy dose escalation) of an open-label, first-in-human (FIH) phase I trial (NCT04170153), tuvusertib (5–270 mg once daily [q.d.]) was evaluated in patients with advanced solid tumors (N = 55). Tuvusertib was tolerated up to 180 mg q.d., the maximum tolerated dose (MTD) under continuous dosing. The most frequently reported dose-limiting toxicity (DLT) during the dose-escalation phase was anemia, which was dose- and exposure-related. Tuvusertib was rapidly absorbed, with dose-proportional PK up to 180 mg, a mean half-life (t1/2) of ~1.2 to 5.6 h, and minimal accumulation following q.d. administration. Exposure-related target engagement, that is, >80% inhibition of γ-H2AX in blood, was attained at doses ≥130 mg. Tuvusertib 180 mg q.d.
administered in a 2‐week on/1‐week off schedule demonstrated a favorable safety profile and was declared the recommended dose for the monotherapy expansion. A preliminary assessment of ethnic sensitivity of tuvusertib was performed based on ICH E5 principles and available clinical PK and safety data in the dose escalation phase. Based on the in vitro studies, aldehyde oxidase (AO) is expected to be the primary enzyme that metabolizes tuvusertib. In vitro and clinical PK data from drugs metabolized by AO indicate no relevant ethnic differences in AO function. , , In the ongoing Western phase I trial, consistency in tuvusertib exposure in Asian ( N = 5) and non‐Asian ( N = 50) populations was ascertained by overlaying individual values of the area under the concentration–time curve over the dosing interval at steady state (AUC τ,ss ) for patients of Asian origin on the 90% prediction interval (PI) of the dose–AUC τ,ss relationship for the study population, estimated using a power model based on the full population (Figure ). The effect of tuvusertib on hemoglobin (Hb) was characterized in a longitudinal semi‐mechanistic, multivariate population PK/PD model to predict the time course of reticulocytes, red blood cells, and Hb. Model‐based simulations of multicycle tuvusertib treatment indicated that Hb reduction in patients of Asian origin was within the 90% PI of Hb reduction in patients of non‐Asian origin at the corresponding doses, suggesting no evidence of a difference in Hb reduction between Asian and non‐Asian populations (Figure ). Taken together, the Totality of Evidence supported an expectation of low ethnic sensitivity. Hence, a common dosage of tuvusertib aligned with the recommended dose for expansion, established in the Western population, was selected (during A1) for further evaluation in China and Japan dose confirmation cohorts (country‐specific A4 and A5), as an early enabler for Asia‐inclusive drug development in future global studies. Cohorts 4 and 5 (six to nine patients each) aimed to confirm consistency in safety and PK in Asian patients receiving monotherapy. Importantly, the above‐described initial assessment of low risk for ethnic sensitivity was instrumental in enabling an Asia‐inclusive FIH MRCT where both Japan and China could join dose confirmation cohorts without the need for a dose escalation design. Of note, the associated clinical trial applications filed with the Pharmaceutical and Medical Devices Agency (PMDA) in Japan and the Center for Drug Evaluation (CDE) in China were supported by the above‐discussed ethnic sensitivity assessments, leveraging data from the dose escalation phase. Notably, we also leveraged the regulatory reforms in China, enabling China to join clinical development as early as completion of the dose escalation phase of the FIH study. Enpatoran Enpatoran is a novel, highly selective, and potent dual toll‐like receptor (TLR) 7/8 inhibitor currently under development for the treatment of autoimmune disorders including systemic lupus erythematosus (SLE), cutaneous lupus erythematosus (CLE), and myositis. Completed phase I and phase II studies evaluated the PK, PD, and safety of enpatoran in healthy volunteers and patients with coronavirus disease 2019 (COVID‐19). , The Western FIH, phase I study (NCT03676322) in healthy participants demonstrated that orally administered enpatoran was well tolerated, with dose‐proportional PK and a t 1/2 of 6.8 to 10.6 h.
PD results showed effective TLR7/8 target modulation, as demonstrated by exposure‐dependent inhibition of ex vivo‐stimulated cytokine (interleukin‐6 and interferon‐α) release. PK and PD data from this study were used to develop population PK/PD models, which, in combination with safety data in humans and preclinical efficacy and PK/PD data, supported the investigation of enpatoran 25, 50, and 100 mg twice a day in patients with SLE or CLE. The ongoing, 24‐week, phase II study (WILLOW; NCT05162586) is evaluating enpatoran in patients with SLE or CLE. To enable patients from Japan and China to be part of the global WILLOW study, a holistic integration of drug and disease knowledge using quantitative clinical pharmacology methods was performed based on the results from an ethno‐bridging study (NCT04880213) and expanding on previously published population PK/PD data and SLE disease trajectory modeling (DTM) results. The ethno‐bridging study in Caucasian and Japanese healthy subjects matched by body weight, height, and sex demonstrated comparable PK/PD properties for enpatoran in the represented Asian and Caucasian subjects across single 100, 200, and 300 mg orally administered doses. DTM suggested no significant differences in SLE disease trajectory for patients of Asian and non‐Asian origin. Although a quantitative DTM was performed in this case, consistent with broader contexts of use for such a MIDD framework, alternative approaches (e.g., a systematic review of the literature) are recommended when investment in developing such pharmacometric models is not feasible. AO is considered to be a key contributor to enpatoran metabolism, and in vitro and clinical PK data from marketed drugs metabolized by AO indicate no relevant ethnic differences in AO function. , Enpatoran absorption or disposition is not expected to be influenced by drug transporters. Based on Totality of Evidence principles (Figure ), inclusion of Japanese and Chinese patients in MRCTs was supported. The data and integrative analyses presented were foundational for regulatory review by the Health Authorities (the PMDA in Japan and the CDE in China), and enabled the inclusion of Asian patients in the ongoing global phase II WILLOW study, confirmed by the respective regulatory consultations. China could join the global phase II study without the need to conduct a dedicated Chinese bridging study. Designing this phase II trial as an Asia‐inclusive MRCT should enable timely learning regarding the efficacy, safety, and associated exposure–response relationships of enpatoran in the target patient populations to further evaluate consistency across global patient populations and enable seamless globalization of potential late‐stage trials following proof‐of‐concept. Berzosertib Berzosertib is an intravenously administered ATR inhibitor that was under evaluation for multiple cancer types as monotherapy or in combination with chemotherapeutics. Berzosertib is also currently being evaluated in combination with lurbinectedin and sacituzumab govitecan. , Berzosertib has moderate to high clearance (~60 L/h), a high volume of distribution ( V d ; ~1250 L), and a t 1/2 of 17 h. Berzosertib PK is dose‐linear (18–480 mg/m 2 ) and unchanged upon coadministration of combination drugs. It is generally well tolerated as monotherapy with no DLTs at doses up to 480 mg/m 2 ( Data on file ). A population PK analysis was performed on data from 240 patients in the Western clinical trials. This dataset included five patients of Asian race (non‐Japanese).
Graphical explorations of demographic covariate variables against between‐subject random effects estimates for clearance and V d showed no relevant relationships for race or ethnicity, indicating that no relevant differences in berzosertib PK between Asian and non‐Asian patients are expected. In addition, berzosertib is dosed on a body surface area‐adjusted basis, which should help bridge demographic differences in body size for Asian versus Western patients. Taken together, it was considered that berzosertib is likely not sensitive to ethnic factors. In an investigator‐initiated study (NCT02487095), the combination of berzosertib and topotecan was evaluated in patients with relapsed small cell lung cancer (SCLC). The recommended phase II dose (RP2D) of this combination was topotecan 1.25 mg/m 2 (Days 1–5) and berzosertib 210 mg/m 2 (Days 2 and 5) in 21‐day cycles. An objective response rate (ORR) of 36% was observed. Subsequently, a global phase II pivotal trial of the berzosertib–topotecan combination in patients with relapsed, platinum‐resistant SCLC (NCT04768296) including Japan and China was designed (Figure ). Since no data were available on berzosertib in Japanese patients and the RP2D of topotecan/berzosertib was considered the MTD, a Japan‐only safety run‐in part with two dose levels (DL, DL1: 105 mg/m 2 berzosertib [Days 2 and 5] + 1.25 mg/m 2 topotecan [Days 1–5]; DL2: 210 mg/m 2 berzosertib [Days 2 and 5] + 1.25 mg/m 2 topotecan [Days 1–5]) was included. A total of three to nine Japanese patients with advanced solid tumors were to receive DL1. If DL1 was tolerated, patients were to receive DL2. If DL2 was tolerated, patients were to be enrolled into the main part of the study. Based on the above justification, alignment with PMDA on a safety run‐in strategy in Japan without a standalone phase I PK/safety assessment was achieved. The safety run‐in portion of the study was completed, and exposure (area under the plasma concentration–time curve extrapolated to infinity [AUC inf ] and maximum serum concentration [ C max ]; Figure ) and safety data were consistent between Japanese and non‐Japanese patients. Based on these results, both Japan and China were able to join the primary/main cohort of the global phase II study directly. This example illustrates the participation of Asia in a global oncology pivotal study without a dedicated phase I PK/safety study. Tepotinib Tepotinib is a highly selective, potent mesenchymal–epithelial transition factor (MET) inhibitor. Tepotinib 450 mg q.d. is approved for the treatment of non‐small cell lung cancer (NSCLC) with MET ex14 skipping alterations in many Asian and non‐Asian countries, based on efficacy data from a multiregional pivotal single‐arm phase II study (NCT02864992, VISION). Tepotinib, which received SAKIGAKE designation, was first approved in Japan in March 2020 based on Cohort A of VISION, representing the first global approval of a MET inhibitor. In February 2021, the Food and Drug Administration (FDA) approved tepotinib for the treatment of adult patients with metastatic NSCLC harboring MET ex14 skipping alterations. Following China's regulatory reforms and its accession to the ICH, China could join a global MRCT and use the totality of the study results for drug registration in China instead of conducting a dedicated regional study.
After consultation with the CDE, the VISION study protocol was amended with a China‐specific extension to allow more time for enrolment in China and thereby meet the registration requirement on sample size (Figure ); statistical simulation confirmed that Chinese patients comprising 20% of the total patients in Cohort C would provide adequate power to show consistency of the treatment effect with the global population. Enrolment in Cohorts A and C of VISION was completed as planned with 152 patients enrolled in Cohort A and 161 patients (including 30 mainland Chinese patients) enrolled in Cohort C. During clinical development, PK of tepotinib was assessed in patients with cancer at doses of 30–1400 mg q.d. The PK properties of tepotinib, that is, t 1/2 of ~32 h and time‐independent clearance, support q.d. dosing. Population PK analysis indicated no relevant effects of race (Caucasian, Japanese, and other East Asian), age, sex, body weight, mild/moderate hepatic impairment, and mild/moderate renal impairment. In addition, rich PK sampling from a phase I study (NCT01832506) in Japanese patients with solid tumors confirmed similar exposure to that in the phase I study in Western patients. , All patients in VISION provided sparse PK data, and the effect of ethnic factors on the PK of tepotinib was further investigated by comparing individual popPK model‐predicted AUC τ,ss at the clinical dose of 450 mg q.d., which confirmed consistent clinical exposures across races/ethnic groups (Figure ). This finding reinforces the rationale of the VISION study as an Asia‐inclusive MRCT following ICH E17 principles, which was accepted by the regulatory authorities across regions and countries as the primary source of evidence to support marketing approval, including the recent approval of tepotinib in China in December 2023. Cladribine Cladribine (2‐chloro‐2′‐deoxyadenosine) is a synthetic chlorinated analog of deoxyadenosine. It is converted to its active triphosphate form, 2‐chlorodeoxyadenosine 5′‐triphosphate, upon phosphorylation by deoxycytidine kinase (DCK) and two additional kinases. Due to the high constitutive expression of DCK in lymphocytes, the DCK to 5′‐nucleotidase ratio favors phosphorylation of cladribine. This leads to selective depletion of dividing and non‐dividing B and T cells. Cladribine is indicated for the treatment of relapsing multiple sclerosis (RMS) in adult patients and is approved in >75 countries and regions (including Hong Kong, Taiwan, and South Korea). Based on the data on Asian patients collected during global clinical development of cladribine and assessment of the impact of ethnic factors using ICH E5 principles, it is concluded that cladribine does not demonstrate ethnic sensitivity. Cladribine has a unique PK/PD profile with a short elimination t 1/2 (~1 day) relative to a prolonged PD effect on specific immune cells (most notably a reversible reduction in B and T lymphocyte counts). This results in a short intermittent dosing schedule (up to 20 days over 2 years of treatment). Cladribine has dose‐linear PK following oral administration with a typical log‐normal distribution of apparent clearance, without evidence for skewness, bimodality, or outliers in the overall distribution of PK parameters. Global clinical studies were conducted primarily in Caucasian patients, in part due to the distinctly higher prevalence of RMS in Western regions.
Although the participation of Asian patients in the development program was limited, reflecting the status of RMS as a rare disease in Asia due to the low prevalence of RMS in Asian populations, a Totality of Evidence approach was used to demonstrate favorable benefit/risk profile of cladribine for treatment of RMS in Asian patients. The absence of ethnic sensitivity and a common dosage of cladribine across Asian and non‐Asian patient populations was confirmed using population PD modeling and simulation of treatment‐related reduction in absolute lymphocyte count (ALC) (Figure ), a PD biomarker of relevance for both safety and efficacy of cladribine. Of 1318 patients in the phase III studies that contributed to population PD modeling of ALC dynamics, 24 were Asian. The time course of change in ALC following cladribine treatment in Asian patients could be quantitatively described well by the mechanism‐based population PD model developed from a global patient population without requiring any additional considerations of ethnicity or race. This example illustrates the value of holistic integration of available data using a MIDD approach and a Totality of Evidence mindset to evaluate ethnic sensitivity in support of Asia‐inclusive development and use of the drug in a rare disease in Asian populations.
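The exposure‐consistency check described for tuvusertib (and echoed in the tepotinib popPK comparison) can be illustrated with a short calculation. The sketch below, written in Python with entirely simulated numbers (the doses, AUC values, subgroup size, and variability are hypothetical and are not trial data), fits a power model on the log–log scale and asks whether individual exposures from a small subgroup fall within the 90% prediction interval of the full‐population dose–exposure relationship. It is a minimal illustration of the underlying idea, not the analyses performed in these programs, which used the clinical datasets and models described above.

# Illustrative sketch only (simulated numbers, not trial data): fit a power
# model AUC = a * Dose^b on the log-log scale for a "full population" and
# check whether individual exposures from a small subgroup fall within the
# 90% prediction interval (PI) of that dose-exposure relationship.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical full-population data: doses (mg) and log(AUC) values.
dose_full = rng.choice([50.0, 80.0, 130.0, 180.0, 240.0], size=50)
log_auc_full = 0.5 + 1.0 * np.log(dose_full) + rng.normal(0.0, 0.25, size=50)

# Ordinary least squares on log(AUC) = log(a) + b * log(Dose).
x = np.log(dose_full)
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, log_auc_full, rcond=None)
resid = log_auc_full - X @ beta
n, p = X.shape
sigma2 = resid @ resid / (n - p)          # residual variance
xtx_inv = np.linalg.inv(X.T @ X)

def prediction_interval(dose_new, alpha=0.10):
    """90% PI for a new individual's AUC at dose_new (back-transformed)."""
    x0 = np.array([1.0, np.log(dose_new)])
    # Variance of a new observation = residual variance + variance of the mean.
    var_new = sigma2 * (1.0 + x0 @ xtx_inv @ x0)
    tcrit = stats.t.ppf(1.0 - alpha / 2.0, df=n - p)
    center = x0 @ beta
    half = tcrit * np.sqrt(var_new)
    return np.exp(center - half), np.exp(center + half)

# Hypothetical subgroup of five patients drawn from the same relationship.
subgroup_dose = np.array([80.0, 130.0, 180.0, 180.0, 240.0])
subgroup_auc = np.exp(0.5 + 1.0 * np.log(subgroup_dose) + rng.normal(0.0, 0.25, size=5))
for dose, auc in zip(subgroup_dose, subgroup_auc):
    lo, hi = prediction_interval(dose)
    print(f"dose {dose:5.0f} mg: AUC {auc:6.1f}, 90% PI [{lo:6.1f}, {hi:6.1f}], "
          f"inside = {lo <= auc <= hi}")

In practice the same logic is applied to observed individual AUC values from the minority subgroup; a systematic pattern of values falling outside the interval would flag a potential ethnic difference in exposure, whereas values scattered within it support a common dosage.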
The pharmaceutical industry has recognized the need for a paradigm shift in global drug development strategies. The transition from bridging approaches to simultaneous global development, with a specific focus on Asia‐inclusive drug development, has become a priority. Supported by regulatory guidelines such as ICH E5 and E17, drug developers are equipped with a framework to consider ethnic factors, evaluate variability, and refine their approach to meet the needs of diverse populations. China, in addition to Japan, has emerged as a significant contributor to Asia‐inclusive drug development due to its robust regulatory framework and growing market influence. By encouraging early‐phase development within its borders, China has provided opportunities to expedite clinical trials and generate valuable data on drug response in diverse populations, as illustrated here in case studies of China‐inclusive early‐phase MRCTs (e.g., tuvusertib FIH, enpatoran phase II). This inclusive approach has enabled pharmaceutical companies to gain insights into the efficacy and safety of investigational therapies in Asia, ultimately benefiting patients worldwide. While the examples reviewed here are for small molecules, the principles and strategies are equally applicable to biologics. Protein therapeutics like monoclonal antibodies are generally less sensitive to known sources of ethnic variability, although knowledge of inter‐population variability in target expression and disease burden is an important consideration when evaluating the risk for potential ethnic sensitivity in PK/PD properties. The role of clinical pharmacology concepts and MIDD approaches cannot be overstated in the pursuit of Asia‐inclusive drug development. Viewed from a broader perspective, reflecting on the examples presented here, we recommend timely quantitative characterization of (a) dose–exposure relationships, (b) ADME mechanisms, (c) therapeutic index based on exposure–response relationships for efficacy and safety, and (d) intrinsic and extrinsic sources of variability in disease biology and patient outcomes, as foundational pillars for ethnic sensitivity assessment to enable Asia‐inclusive drug development through MRCT design guided by ICH E5 and E17 principles. A robust understanding of ADME mechanisms and therapeutic index is particularly vital when dealing with complex modalities (e.g., antibody–drug conjugates) and in the setting of non‐linear PK. In this context, it is important to note that every patient's data matters, even when they represent the minority in the enrolled population and analysis dataset. We have demonstrated this across examples (tuvusertib FIH PK and safety analyses, berzosertib end of phase II population PK analysis, and cladribine phase III population PD modeling) where data in a limited number of Asian patients in global clinical trials were valuable in advancing Asia‐inclusive development. Furthermore, by building these considerations into the development plan, study designs can be adapted to address questions of regional variability, connected with statistical design principles (e.g., exchangeability/non‐exchangeability concepts), to enable confirmatory evidence generation.
In summary, the case studies presented here illustrate the successful implementation of ICH E5 and E17 principles for efficient Asia‐inclusive drug development and the importance of timely consideration of ethnic sensitivity through evaluation of drug‐ and disease‐related intrinsic and extrinsic factors in global drug development. All authors contributed to writing various sections of the manuscript, critically reviewed the manuscript, and approved the final version before submission. The study was funded by EMD Serono Research and Development Institute Inc., Billerica, Massachusetts, USA. H.L. and D.L. are employees of Merck Serono Co., Ltd., Beijing, China, an affiliate of Merck KGaA, Darmstadt, Germany. L.K.‐S. and R.S. are employees of the healthcare business of Merck Healthcare KGaA, Darmstadt, Germany. Y.K. is an employee of Merck Biopharma Co., Ltd., Tokyo, Japan, an affiliate of Merck KGaA, Darmstadt, Germany. J.K.M., K.G., J.D., and K.V. are employees of EMD Serono, Billerica, MA, USA. N.T. is an employee of Ares Trading S.A., Lausanne, Switzerland, an affiliate of Merck KGaA, Darmstadt, Germany. J.B. and W.G. were employees of EMD Serono, Billerica, MA, USA, when the study was conducted.
A script‐enabled interactive checklist document for efficient management of electronic devices in a busy multimodality radiotherapy clinic
c75d30eb-e73e-4c64-89c7-ad798a200c7e
10929987
Internal Medicine[mh]
INTRODUCTION Patients undergoing radiotherapy can present with a wide variety of implanted electronic devices that require the radiation oncology care team to assess and, where prudent, implement additional patient‐safety measures due to the potentially harmful effects of radiation on the devices. , These effects have been detailed in the report of the AAPM's Task‐Group 203 (TG‐203) in the context of cardiac implanted electronic devices (CIEDs) and include device malfunction from cumulative dose, dose rate, and neutron‐induced single‐event upset (SEUs). The impact from such effects has led to several consensus guidelines including TG‐203 and others , for CIEDs. The same radiation effects can also affect non‐cardiac devices, though the result of a device failure will vary by the device's function leading to additional guidelines for non‐cardiac devices. , The relative risk of potential failure modes for an irradiated device differs based on the modality of treatment being received. Our clinic treats with pencil‐beam scanning (PBS) proton therapy as well as linac‐based x‐ray therapy. In x‐ray therapy, the out of field dose received by a device is primarily from scattered photons while in proton therapy it is from secondary neutrons. Although the measured neutron dose for PBS is less than both passively‐scattered proton therapy and 18 MV x‐ray therapy, , , a SEU could potentially lead to catastrophic failure, in particular for CIEDs, leading to enhanced treatment guidelines in PBS proton therapy. , , The workflow and experience in our clinic for treating CIED patients with PBS protons has been previously reported. This study's purpose was to design a single clinical checklist document that was flexible to accommodate differing guidelines based on device type and treatment modality, while efficiently streamlining the quick workflow of our busy clinic. This was accomplished using scripting functionality compatible with our clinical radiation oncology information system (ROIS). METHODS 2.1 Operating in a multimodality clinic The external beam therapy modalities available at our clinic consist of 12 Varian C‐arm linear accelerators, two Varian Ethos linear accelerators, and a Hitachi Probeat V synchrotron system feeding to four PBS treatment gantries. Eight of the C‐arm accelerators are at regional satellite facilities, while the rest are at our central campus. For the remainder of this manuscript, a reference to the “x‐ray clinic” includes both the main campus and all satellite facilities which follow the same workflow. In 2022, we treated > 5500 and > 1200 courses of radiation in our x‐ray and proton clinics, respectively, with an average time from simulation to treatment of 3.5 days for the central x‐ray clinic. Our department has implemented multiple internal guideline documents for management of radiotherapy patients with implanted electronic medical devices. Separate decision trees were developed in collaboration with the Cardiology department for patients with cardiac devices being treated with proton therapy due to the higher potential for SEUs compared to ≤10 MV x‐rays. Guidelines for non‐cardiac devices treated with either modality were determined in collaboration with the respective multidisciplinary specialty clinics. 2.2 General workflow and document sections When a new patient enters our clinic's workflow they are asked at several junctions, including during the initial consultation with the treating radiation oncologist, whether they have any implanted medical devices. 
If the answer is yes, a task is given to the nursing team to determine device type and model and begin the triage process to determine if further action is required. Part of this process is importing a Medical Device Action Plan and Checklist template in Aria v15.6 (Varian, Palo Alto, California, USA), our clinic's ROIS. This document is used for each course of treatment, and it interactively guides the radiation‐oncology team on what actions are needed. The template starts with no selectable fields entered as in Figure and gradually populates tasks for the nursing care team, the medical physics team, and the final action plan based on selections made by the user. The document's action plan is split into four time points of the radiotherapy process which may require a specific action, typically taken by the radiation therapy technologist (RTT) or dosimetry teams. These time points are at initial CT simulation, treatment planning, image guidance used for daily patient alignment, and treatment itself. Additionally, it prompts the care team (which includes physicians, advanced practitioners, and/or registered nurses) to schedule any appropriate appointments in other specialty clinics (e.g., cardiology for CIEDs) and on‐treatment monitoring. Based on the device type and treatment modality, initially selected by the nursing staff, a medical physicist then reviews the document to verify if any steps beyond the standard for the device and modality combination are necessary for the CT simulation and treatment planning action plans. The Action Plan fields are pre‐populated given the interactive nature of the document; however, the medical physicist and other care team members can also enter free text in a comments section for unique cases. After contouring and treatment planning is completed, a medical physicist again interacts with the document to confirm that the treatment planning action plan was followed and to determine if a special medical physics consult is required based on the device type, treatment modality, and the distance from the device to the treatment target. If the device was within the CT scan range, the device would be contoured and the distance from target to device determined by an ESAPI script (independent from the scripts within the checklist document) that calculated the closest distance from the device contour to the 50% isodose line. If the device was not within the CT scan range, the medical physicist would use CT scout images for estimation; including the device in the scouts is one of the pre‐populated action plan items at CT simulation. The RTTs perform a final check of the patient chart before treatment begins. During this process, they confirm that all sections of the document have been completed (reaching out to the appropriate team member if not) and copy any treatment and treatment imaging action plan items into a patient alert that displays each day at treatment. 2.3 Document scripting The checklist document was created using Microsoft Word (Microsoft, Redmond, Washington, USA) with the interactivity governed by macros written in Microsoft's Visual Basic for Applications (VBA) scripting language. Crucially, Aria, by default, allows macro‐enabled Word documents in its Documents workspace which permits all members of the department to interact with one centrally stored and accessible document. Note it is possible to disable macros in Aria 15.6 and instructions for individual clinics to check these settings are included in the Supplementary Material. 
Microsoft Word provides built‐in "Content Controls" in the form of check boxes, drop‐down menus, and free‐text fields. Using VBA, macros can be tasked to run when the user enters or exits a content control. Sections of text can be assigned as a "Bookmark" which can be accessed as objects by VBA. The macros read new values from a drop‐down content control and manipulate parts of the document accordingly. Specifically, all potential options for any device were pre‐written, assigned as individual bookmarks, and then set as hidden text. When a specific device and treatment modality are selected, bookmarks relevant to those selections are unhidden by the macros and presented to the user for further action. The interoperability of Aria with interactive Word features additionally extends to bookmarks; if specifically named bookmarks are used, Aria auto‐populates demographic information such as the patient's name, medical record number, and the treating physician. The document was developed and tested outside of Aria with new versions uploaded through the Aria Data Administration application; access and rights to Data Administration were required. 2.4 Interactive branching While many of the presented options are simple check boxes to indicate a certain task was performed, there are specific drop‐down selectors whose content greatly changes what is presented to the user. These main branching points are shown by the diagram in Figure . The type of device and treatment clinic are the two primary decisions; the document workflow only begins once both are selected. Only the main branching points are illustrated; some non‐CIED devices also require a distance to determine whether a consultation in the specialty clinic is warranted. The types of devices were sorted into three main categories as seen in Table . If a CIED device is selected, the decision trees diverge rapidly depending on the treatment clinic, how dependent the patient is on the device, and, for the x‐ray clinic, how close the device is to the treatment region. In contrast, the on‐treatment management of endocrinology devices, such as insulin pumps and monitors, is standardized across the treatment modalities. The "other devices" categories span various neurostimulators, infusion pumps, cochlear implants, ventriculoperitoneal shunts, and other non‐specified devices that can be present. Some of these devices could be removed for treatment, resulting in no action plan needed beyond removal, while others have device‐specific action items, e.g., having a spare pulmonary infusion pump on hand in case of failure.
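To make the scripting mechanism described in Section 2.3 concrete, the fragment below is a minimal sketch of how such a macro could be structured; it is illustrative only and is not the code used clinically. The content‐control titles ("DeviceType", "TreatmentClinic") and bookmark names ("CIED_Proton", "CIED_Xray") are hypothetical placeholders, and the clinical document branches over many more device and modality combinations. The handler runs when the user exits a content control, reads the drop‐down selections, and toggles the hidden‐text property of pre‐written, bookmarked blocks.

' Minimal illustrative VBA sketch placed in the ThisDocument module.
' Control titles and bookmark names below are hypothetical placeholders.
Private Sub Document_ContentControlOnExit(ByVal CC As ContentControl, Cancel As Boolean)
    ' Word fires this event only once the user fully exits the field.
    If CC.Title = "DeviceType" Or CC.Title = "TreatmentClinic" Then
        RefreshActionPlan
    End If
End Sub

Private Sub RefreshActionPlan()
    Dim device As String, clinic As String
    device = ControlText("DeviceType")
    clinic = ControlText("TreatmentClinic")
    ' Hide every pre-written block, then unhide only the relevant one.
    ShowBookmark "CIED_Proton", False
    ShowBookmark "CIED_Xray", False
    If device = "CIED" And clinic = "Proton" Then
        ShowBookmark "CIED_Proton", True
    ElseIf device = "CIED" And clinic = "X-ray" Then
        ShowBookmark "CIED_Xray", True
    End If
End Sub

Private Function ControlText(title As String) As String
    ' Return the current text of the content control with the given title.
    Dim CC As ContentControl
    For Each CC In Me.ContentControls
        If CC.Title = title Then ControlText = CC.Range.Text
    Next CC
End Function

Private Sub ShowBookmark(bmName As String, showIt As Boolean)
    ' Pre-written action-plan text is stored as hidden, bookmarked ranges.
    If Me.Bookmarks.Exists(bmName) Then
        Me.Bookmarks(bmName).Range.Font.Hidden = Not showIt
    End If
End Sub

A companion routine that re‐hides all blocks and clears prior selections when the device type or treatment clinic changes provides the reset safety net described in the Discussion.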
RESULTS Two examples of final checklist documents for CIEDs are given in Figure , showing divergent action plans based on treatment with protons as compared with x rays. The text and options shown were auto‐populated based on the user's selections in the "Device," "Treatment Clinic," "Distance to treatment site," "Pacing Dependent?", and "Is device within 10 cm of 50% isodose line" options. The figure highlights the different decision trees between the two treatment modalities including differences in planning technique (restriction on beam energy for the x‐ray clinic to ≤ 10 MV to avoid neutron production), on‐treatment monitoring (pulse oximetry vs. daily ECG for the x‐ray and proton clinics, respectively), frequency of device function checks (single mid‐treatment vs. daily for the x‐ray and proton clinics, respectively), and daily image guidance (due to the different volumetric imaging technology available; CBCT and CT‐on‐rails for the x‐ray and proton clinics, respectively). These are generic examples; guidelines and dose limits may vary based on specific manufacturer recommendations. Figure demonstrates two more representative checklist documents that were filled out for an endocrinology device treated in the x‐ray clinic and a pulmonary hypertension infusion pump treated in the proton clinic. These demonstrate how auto‐populated fields differ for the two non‐CIED branches in Figure . DISCUSSION The multiple guideline documents used by our clinic are in‐depth, detailed, and consequently long documents to parse. Consulting such detailed guidance can be prone to errors or omissions, and AAPM Medical Physics Practice Guideline 4.a recommends a checklist as an approach to mitigate such risk. As such, one of the goals of the interactive checklist was to reduce the frequency at which individual team members would need to consult the full‐length guidance documents. This is especially important since weeks may pass between cases involving a specific type of device for an individual nurse or medical physicist. Because an automated process is less error‐prone than manual recall, after the new document's release some team members were presented with management options for devices they had not previously been considering. In some cases, this led to practice quality improvement efforts to either update the guidelines to remove outdated recommendations or re‐emphasize guidelines that were misremembered. Releasing the interactive document to the full clinic required forethought regarding training on its use. A peculiarity of how Microsoft Word handles its content controls is that macros only run after the user fully exits the field, either by hitting the "Tab" key or clicking elsewhere in the document. This led to confusion among some early testers when no options were presented, requiring additional training for those who interact with the document. This was accomplished by training a handful of individuals in each work group who then became point people for the rest of their team. Another important consideration in the document's VBA programming was that multiple team members would interact with it at separate times and variables defined and used by the macros could be cleared between uses in Aria. The state of the document thus needed to be checked upon entry of a content control field, and all variables strictly re‐assigned, to ensure the correct script was executed upon exit. One limitation of using scripting in such documents is the linear directionality of the workflow.
The further down into the workflow past branching points that a user goes, the more challenging it is from a scripting perspective to accurately change the options if, e.g., the user goes back to the beginning and changes the treatment modality. One such near miss occurred with an early version of the document for a CIED patient who initially was to receive x‐ray treatment but then switched to a proton treatment and the required on‐treatment monitoring was not scheduled until the morning before treatment began. To prevent future occurrences, a safety net was built into the macros such that if the user changes either the device type or treatment modality the document and its fields are reset, forcing the user to start from a fresh checklist. A pop‐up is presented to the user to confirm that they want this to occur. Alternatively, the document can be “errored out” in Aria (hidden by default, but still present in the patient's record) and a new one inserted if there is any uncertainty about making changes later in the pre‐treatment process. Building more “backwards compatibility” into the document is an area for future improvements. The checklist document was constructed to follow the guidance recommendations for a single device type. It is not uncommon, however, for patients to present with more than one device. Our standard procedure, as indicated in the header material in Figure , was to insert and fill out a separate document for each device. One exception to this process was allowed for the common combination of a patient having both an insulin pump and a continuous glucose monitor. The action steps for these devices did not conflict and thus a combined option in the device selection list was given such that options for both devices are presented. Adding more common device combinations as guidance becomes available is an area of ongoing improvement. Another area for future development is to automatically populate the dose limit for the device based on the selected manufacturer if an individual manufacturer has a standard recommendation. If this were to be implemented, reviewing the standard recommendation at some frequency would still be recommended to avoid disconnect if that recommendation were to change in the future. At the release of the checklist document, the medical physicist was responsible for identifying and selecting the dose limit. If none was provided by the manufacturer, our department's standard policy was to limit dose to < 200 cGy (as demonstrated in the left panel of Figure ). Two medical physicists jointly developed the checklist document and the associated macros. To mitigate the likelihood of software bugs reaching the clinically deployed version, when one physicist made a change the other then performed a quality‐control test. The document was also developed using Microsoft's OneDrive for Business application which included built in version‐control functionality. Furthermore, given the potentially divergent nature of recommendations between the x‐ray and proton clinics, one medical physicist from each clinic is involved to stay current with recommendations. This duplicate‐development structure and ongoing internal documentation also made the document's future viability more robust to staff turnover. CONCLUSION An interactive multimodality checklist document was created and clinically deployed for the management of patients with implanted medical devices in a large high‐volume radiotherapy clinic. 
CONCLUSION

An interactive multimodality checklist document was created and clinically deployed for the management of patients with implanted medical devices in a large high-volume radiotherapy clinic. The document condensed the requirements from multiple comprehensive guidelines, including for CIEDs treated with either x-ray or proton radiotherapy, into a single location to reduce the amount of time needed by staff to consult and parse the in-depth documents. The interactivity was accomplished by leveraging the built-in functionality of Microsoft Word and its VBA scripting environment, which is accessible to all institutions that use the common document-creation software. This approach could thus be adopted by other clinics to streamline their own workflows and reduce the burden of treating patients with implanted medical devices.

AUTHOR CONTRIBUTIONS

Conception of the document: Eric Brost. Design and software development for the document: Mark Pepin, Eric Brost. Conception and design for the device guidelines: Yolanda Garces, Debra Brinkmann, Kristi Klein. Draft manuscript preparation: Mark Pepin, Eric Brost. All authors reviewed and approved the final version of the manuscript.

CONFLICT OF INTEREST

The authors have no relevant conflicts of interest to disclose.
Efficacy of ultrasound-guided technique for radial artery catheterization in pediatric populations: a systematic review and meta-analysis of randomized controlled trials
c5b7bb6a-ab52-4241-bc7c-1ec9abf556a8
7201726
Pediatrics[mh]
Introduction

Arterial catheterization is a common and essential procedure performed in many clinical settings, such as the emergency department, intensive care unit, and operating room . It allows continuous blood pressure monitoring and repeated arterial blood sampling. The radial artery is the most common site for arterial catheterization because of its superficial location, dual arterial supply to the hand, and low rate of complications . Traditionally, radial artery catheterization is performed under the guidance of anatomical knowledge and pulse palpation. However, the traditional palpation technique can be technically challenging, often requiring multiple attempts and causing patient discomfort and suffering, particularly in pediatric patients or patients with hypotension, edema, or obesity .

With the development of ultrasound applications in medicine, the ultrasound-guided technique has become a commonly used tool for central vein catheterization. A series of studies have confirmed that the use of ultrasound guidance can increase success rates and reduce the rates of complications compared with the traditional palpation technique. With respect to radial arterial catheterization, previous systematic reviews and meta-analyses comparing the ultrasound-guided technique with traditional palpation have reported higher first-pass success rates, less time to catheter insertion, and fewer hematomas with ultrasound-guided radial artery access, although several pediatric studies were included in these analyses . However, the use of ultrasound guidance for radial arterial catheterization in pediatric populations has not been well established. A recent systematic review and meta-analysis on arterial cannulation in pediatrics conducted by Aouad-Maroun et al. yielded limited results because it included all sites of arterial cannulation (radial, ulnar, brachial, femoral, or dorsalis pedis artery). Since then, two more randomized controlled trials (RCTs) have been published on this topic. With this accumulating evidence, we therefore conducted a systematic review and meta-analysis of RCTs to compare the efficacy of the ultrasound-guided technique with traditional palpation for radial artery catheterization in pediatric patients.

Methods

This systematic review and meta-analysis was conducted according to the recommendations of the Cochrane Handbook for Systematic Reviews of Interventions and the guidelines established by the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) Group. Ethical approval and patient consent were not required for this study.

Literature search

The electronic searches were performed in PubMed, Medline, Embase, the ClinicalTrials.gov registry, the Cochrane Central Register of Controlled Trials (CCTR), and the Cochrane Database of Systematic Reviews (CDSR) from their date of inception to December 2019. Medical Subject Headings (MeSH) terms and corresponding keywords were used for the search with various combinations of the operators "AND" and "OR": (MeSH exp. "Ultrasonography," "Ultrasonics," and keywords "ultrasonography*," "ultrasonic*," "ultrasound*," and "ultrasound-guided"), (MeSH exp. "Radial Artery" and keywords "radial arteries," "radial artery," and "radial arterial"), and (MeSH exp. "Catheterization," "Cannula," "Catheter," and keywords "catheterization," "cannula," "cannulation," and "catheter"). We also checked the bibliographies of previous reviews and reviewed the reference lists of all retrieved articles to identify further potentially relevant studies.
Selection criteria

The inclusion criteria were as follows: (1) population: pediatric patients (age < 18 years) requiring radial arterial catheterization, (2) intervention: ultrasound-guided technique, (3) comparison: traditional palpation technique, and (4) study design: RCTs. We excluded abstracts, case reports, conference presentations, editorials, and reviews. For duplicate reports containing the same population data, only the one with the longest follow-up and most complete information was included.

Data extraction and management

Two reviewers (W. Z. and K. L.) independently extracted the data from each article that met the inclusion criteria. The following data were recorded in a standardized form: name of the first author and year of publication, study period, country of study, age range, sample size, clinical setting, operator experience, ultrasound device, and ultrasound approach. The primary outcomes were the rates of first-attempt and total success of radial arterial catheterization. The mean attempts to success, mean time to success, and incidence of complications were recorded as the secondary outcomes. Any discrepancy was resolved by thorough discussion.

Assessment of risk of bias in included studies

Two authors (W. Z. and K. L.) assessed the risk of bias independently and in duplicate. We resolved disagreements by consensus or by consultation with a third review author (H.X.). The risk of bias was assessed according to the risk of bias tool of the Cochrane Collaboration. It included six domains: random sequence generation (selection bias); allocation concealment (selection bias); blinding of participants, providers, data collectors, outcome adjudicators, and data analysts (performance bias and detection bias); incomplete outcome data (attrition bias); selective outcome reporting (outcome reporting bias); and other biases. We rated trials as having a "low," "high," or "unclear" risk of bias and evaluated individual bias items as described in the Cochrane Handbook for Systematic Reviews of Interventions .

Statistical analysis

Review Manager version 5.3.5 (Cochrane Collaboration, Oxford, UK) was used for all data analyses. The risk ratio (RR) and weighted mean difference (WMD) were used to analyze dichotomous and continuous outcomes, respectively. Both were reported with 95% confidence intervals (CIs), and a P value lower than 0.05 or a 95% CI that did not contain unity was considered statistically significant. Heterogeneity was evaluated with the I² statistic, with I² > 50% indicating significant heterogeneity. In this meta-analysis, both fixed- and random-effects models were employed. Since similar results were obtained, only the results of the random-effects model are presented.
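For reference, the heterogeneity statistic and the random-effects pooling described above can be written in the standard inverse-variance (DerSimonian–Laird) form. This is a generic sketch of the approach rather than RevMan's exact computation, whose weighting for dichotomous outcomes depends on the method selected (e.g., Mantel–Haenszel):

\[
w_i = \frac{1}{v_i}, \qquad
\hat{\theta}_{\mathrm{FE}} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad
Q = \sum_i w_i \bigl(\hat{\theta}_i - \hat{\theta}_{\mathrm{FE}}\bigr)^2, \qquad
I^2 = \max\!\Bigl(0,\ \tfrac{Q-(k-1)}{Q}\Bigr) \times 100\%,
\]

\[
\hat{\tau}^2 = \max\!\Biggl(0,\ \frac{Q-(k-1)}{\sum_i w_i - \sum_i w_i^2 \big/ \sum_i w_i}\Biggr), \qquad
\hat{\theta}_{\mathrm{RE}} = \frac{\sum_i \hat{\theta}_i/(v_i+\hat{\tau}^2)}{\sum_i 1/(v_i+\hat{\tau}^2)}, \qquad
\mathrm{SE} = \Bigl(\sum_i \tfrac{1}{v_i+\hat{\tau}^2}\Bigr)^{-1/2},
\]

where \(\hat{\theta}_i\) is the log risk ratio (or the mean difference) from study \(i\) with variance \(v_i\) and \(k\) is the number of studies; on the RR scale, the pooled estimate is \(\exp(\hat{\theta}_{\mathrm{RE}})\) with 95% CI \(\exp(\hat{\theta}_{\mathrm{RE}} \pm 1.96\,\mathrm{SE})\).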
Results

Literature search

Two hundred eighty-six articles were identified from the electronic databases (excluding duplicates). After application of the inclusion and exclusion criteria, seven studies were finally included in this meta-analysis. All seven studies were randomized controlled trials of radial artery catheterization. The literature search procedure is shown in Fig. . The seven included studies involved a total of 558 radial artery catheterizations, including 274 ultrasound-guided arterial catheterizations and 284 palpation catheterizations. The main characteristics of the included trials are summarized in Table .
Risk of bias in included studies

Figure shows the risk of bias summary, which reflects judgments about each risk of bias item for each included study. Overall, three trials were categorized as at low risk of bias, four as unclear, and none as at high risk of bias. An adequate randomization sequence was generated in all seven studies, and appropriate allocation concealment was reported in five trials. Blinding of outcome assessment was unclear or seldom reported in the seven trials, but the primary outcome was less prone to be influenced by the lack of blinding.

Selective reporting

Six of the included studies reported the success rate at the first attempt, and all of them gave the total success rate. Only four studies reported the incidence of complications, which might indicate selective reporting bias. The secondary outcome of mean time to success was reported in all of the studies, but only three trials provided the mean ± standard deviation (SD); the other four did not report the SD.

Primary outcomes: first-attempt success and total success

Six RCTs were used to calculate the pooled estimate of the rate of first-attempt success. Overall, the rate of first-attempt success in the ultrasound-guided group and the palpation group was 55.1% and 30.3%, respectively. Ultrasound-guided radial artery catheterization was associated with an increased first-attempt success (RR 1.78, 95% CI 1.46 to 2.18, P < 0.00001, Fig. ), and no significant heterogeneity was shown among these studies (I² = 24%). The rate of total catheterization success was reported in all seven studies. The data demonstrated that the rate of total success was significantly higher in the ultrasound-guided group than in the palpation group (83.9% vs. 62.7%; RR 1.33, 95% CI 1.20 to 1.48, P < 0.00001, Fig. ). However, significant heterogeneity was observed among the included studies for total success (I² = 67%, Fig. ).

Subgroup analysis based on age

Only one trial reported data on older children. This study involved a wide age range (0–18 years), but most participants were older children, with a mean age of 99 months in both groups. The other studies reported data on infants and small children. For first-attempt success, no difference was detected in the study of older children (one trial, RR 1.01, 95% CI 0.46 to 2.24), whereas a significant difference was found in small children and infants (five trials, RR 1.90, 95% CI 1.55 to 2.33). However, the test for subgroup effects revealed that age-related subgroup differences were not statistically significant (P = 0.13). In terms of the total success rate, there was also only one study of older children, and no difference was shown (RR 1.05, 95% CI 0.84 to 1.30). Six trials reported the total success rate in small children and infants, and a significant difference was detected (RR 1.45, 95% CI 1.29 to 1.63) (Fig. ).

Subgroup analysis based on the operator's experience

Of the seven studies included, five reported the operator's experience with radial arterial catheterization. Only one study reported that no operator had performed more than 10 ultrasound-guided arterial cannulations before the study; in the other studies, operators had performed more than 10 arterial catheterizations or were familiar with the ultrasound-guided technique for central venous catheterization.
Results showed that, when the operator had minimal experience, the ultrasound-guided technique did not significantly increase first-attempt success or the total success rate in pediatric populations compared with the palpation technique (one study, RR 1.01, 95% CI 0.46 to 2.24 for first-attempt success; RR 1.05, 95% CI 0.84 to 1.30 for total success). However, in the subgroup of studies in which operators had more experience, both first-attempt success and total success were significantly increased in the ultrasound-guided group (four studies, RR 2.08, 95% CI 1.63 to 2.67 for first-attempt success; RR 1.56, 95% CI 1.34 to 1.81 for total success) (Fig. ).

Secondary outcomes

Similar to previous studies, ultrasound-guided radial artery catheterization was associated with fewer mean attempts to success (WMD − 0.96, 95% CI − 1.35 to − 0.56, P < 0.00001, Fig. ), a shorter mean time to success (WMD − 98.65 s, 95% CI − 142.02 to − 55.29, P < 0.00001, Fig. ), and a lower incidence of hematoma (RR 0.21, 95% CI 0.11 to 0.42, P < 0.00001, Fig. ).
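To illustrate how the study-level estimates behind these pooled figures are obtained (using hypothetical counts, not data from any included trial), suppose a single trial observed \(a = 22\) first-attempt successes among \(n_1 = 40\) ultrasound-guided catheterizations and \(c = 12\) among \(n_2 = 40\) palpation catheterizations. Then

\[
\mathrm{RR} = \frac{a/n_1}{c/n_2} = \frac{22/40}{12/40} \approx 1.83, \qquad
\mathrm{SE}(\ln \mathrm{RR}) = \sqrt{\frac{1}{a} - \frac{1}{n_1} + \frac{1}{c} - \frac{1}{n_2}} \approx 0.28,
\]

giving a 95% CI of \(\exp(\ln \mathrm{RR} \pm 1.96 \times \mathrm{SE}) \approx 1.06\) to \(3.18\). Such study-level log risk ratios and their variances are the inputs to the pooled random-effects estimates reported above.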
Discussion

This systematic review and meta-analysis of seven RCTs evaluated the efficacy of the ultrasound-guided technique for radial arterial catheterization in pediatric populations. From the available data, the present meta-analysis showed that the ultrasound-guided technique was associated with higher rates of first-attempt and total success in radial arterial catheterization for pediatric patients compared with the traditional palpation technique. Additionally, ultrasound-guided radial artery catheterization significantly reduced the mean attempts to success, the mean time to success, and the incidence of hematoma. Since the use of ultrasound guidance for arterial catheterization was first reported by Nagabhushan et al. in 1976, it has been used increasingly for this purpose. Several reports have demonstrated the advantage of the ultrasound-guided technique for the insertion of an arterial catheter in adult populations .
A recent meta-analysis conducted by Aouad-Maroun and colleagues aimed to compare the ultrasound-guided technique with other techniques (including the traditional palpation technique and Doppler) for arterial catheterization in pediatric patients. However, the low number of RCTs included made the level of evidence relatively low, and a high degree of heterogeneity was present because of the inclusion of Ueda's study comparing ultrasound with Doppler, which may have introduced additional bias. Therefore, a further meta-analysis was required to evaluate the efficacy of the ultrasound-guided technique versus traditional palpation.

This meta-analysis of comparative studies investigated the ultrasound-guided technique versus the traditional palpation technique for radial artery catheterization in pediatric populations. The results of the present review confirmed previously reported advantages of the ultrasound-guided technique in pediatric patients. The use of ultrasound guidance for radial arterial catheterization could increase the rates of first-attempt and total success and reduce the incidence of complications. Hansen et al. attributed this to the ability of the ultrasound-guided technique to identify the target vessel, collateral vasculature, and nervous structures while providing real-time guidance of catheter insertion. Controversy remains over which is better for radial arterial catheterization, the short-axis out-of-plane technique or the long-axis in-plane technique . Sethi et al. found that identification of the midpoint of the radial artery on a short-axis view was probably easier with the out-of-plane technique. This may explain why the short-axis approach was used in most of the studies included in our meta-analysis.

Technically, the operator's experience plays an important role in using ultrasound guidance for radial arterial catheterization. Recent guidelines have recognized that ultrasound-guided cannulation success rates are higher when trainees have developed general experience, skill, and dexterity . The data from the present study suggested that ultrasound guidance significantly increased the first-attempt success rate when performed by an experienced operator (RR 1.98, 95% CI 1.04–3.77) but not by an inexperienced operator (RR 1.36, 95% CI 0.84–2.20). This is consistent with the previous report that ultrasound guidance might be particularly useful for the most experienced operators and that inexperience may prevent operators from realizing its full benefit.

Catheterization of the radial artery can be technically challenging in small children and infants because of the small vessel diameter, even for experienced operators, and repeated unsuccessful attempts can cause complications such as hemorrhage and hematoma formation . In this meta-analysis, the results showed that ultrasound-guided radial artery catheterization in small children and infants increased the rates of first-attempt and total success compared with the traditional palpation technique. Only one study reported on older children, and its data showed that ultrasound guidance did not provide a higher success rate for radial artery catheterization in this group. However, the operators in that study were inexperienced and lacked training, which may have masked the true effect of ultrasound guidance on radial artery catheterization.
We further examined the mean attempts to success and the mean time to success to assess the effects of ultrasound-guided radial artery catheterization. The results showed that the ultrasound-guided technique also significantly reduced the mean attempts to success and the mean time to success in radial arterial catheterization for pediatric populations compared with the traditional palpation technique.

A meta-analysis is a quantitative method that combines data from several independent studies addressing the same question, pooling outcomes to reach a more precise and less biased conclusion . However, several limitations exist in the present meta-analysis. First, the sample sizes were small in most of the included studies, which decreases the overall precision of the estimates. Second, the RCTs in our meta-analysis were performed in different clinical settings and in various patient groups, which may have resulted in significant heterogeneity among the reviewed studies. Furthermore, other clinically relevant endpoints, such as patient pain and patient and physician satisfaction, were not assessed.

Conclusions

The results of the current meta-analysis suggested that the ultrasound-guided technique was associated with higher rates of first-attempt and total success and a lower incidence of hematoma compared with the traditional palpation technique. Ultrasound guidance is an effective and safe technique for radial artery catheterization, especially in small children and infants, and could be recommended to aid radial arterial catheterization. However, the results should be interpreted cautiously due to the heterogeneity among the studies.